⬣PulseChain LIVE⬣ 💥

2.6K posts

@PulseChainLIVE

#AI #Cybersecurity #Linux #privacy

Middle of the GPU · Joined January 2023
861 Following · 1.9K Followers
⬣PulseChain LIVE⬣ 💥 reposted
David Hendrickson@TeksEdge·
🚨 It’s Actually Happening! Jensen’s Vision of an Open Source Future is Here with @NVIDIA Nemotron-Cascade-2! Heavy testing is commencing.
@NVIDIA Nemotron-Cascade-2 vs Qwen3.5-35B-A3B: 30B total / 3B active MoE vs 35B total / 3B active MoE. Same “intelligence density” size. Totally different results. ⚡
🏆 CONTEST MEDALS
• IMO 2025 → Gold (35/42 pts)
• IOI 2025 → 439.3 vs 348.6
• ICPC World Finals → Gold (10/12 problems)
📊 BENCHMARK BEATDOWN
• LiveCodeBench v6 → 87.2% (88.4% TIR) vs 74.6% 🔥
• ArenaHard v2 → 83.5% vs 65.4% (+18 pts)
• AIME 2025 → 92.4% vs 91.9%
• IFBench → 82.9% vs 70.2%
Where Qwen fights back:
• SWE Verified → 60.5% vs 50.2%
• Knowledge (MMLU-Pro / GPQA) → Qwen edge
Bottom line: Nemotron-Cascade-2 delivers higher reasoning density on math, coding & agentic tasks while being fully open-weight + 1M context. NVIDIA’s Cascade RL + Multi-Domain Distillation is the cheat code. An open model that actually wins gold medals in 2025 competitions. Which are you loading first? 👀👇
David Hendrickson tweet media
Wei Ping@_weiping

🚀 Introducing Nemotron-Cascade 2 🚀
Just 3 months after Nemotron-Cascade 1, we’re releasing Nemotron-Cascade 2: an open 30B MoE with 3B active parameters, delivering best-in-class reasoning and strong agentic capabilities.
🥇 Gold Medal-level performance on IMO 2025, IOI 2025, and ICPC World Finals 2025:
• Capabilities once thought achievable only by frontier proprietary models (e.g. Gemini Deep Think) or frontier-scale open models (i.e. DeepSeek-V3.2-Speciale-671B-A37B).
• Remarkably high intelligence density with 20× fewer parameters.
🏆 Best-in-class across math, code reasoning, alignment, and instruction following:
• Outperforms the latest Qwen3.5-35B-A3B (2026-02-24) and the even larger Qwen3.5-122B-A10B (2026-03-11).
🧠 Powered by Cascade RL + multi-domain on-policy distillation:
• Significantly expands Cascade RL across a much broader range of reasoning and agentic domains than Nemotron-Cascade 1, while distilling from the strongest intermediate teacher models throughout training to recover regressions and sustain gains.
🤗 Model + SFT + RL data: 👉 huggingface.co/collections/nv…
📄 Technical report: 👉 research.nvidia.com/labs/nemotron/…

1 · 4 · 30 · 2.4K
⬣PulseChain LIVE⬣ 💥 reposted
Dr. Clown, PhD@DrClownPhD·
We are so fvcked...
475 · 914 · 7K · 677.7K
OpenCode@opencode·
MiniMax M2.7 available in Go
- Better at complex tasks than M2.5
- Fast: give it a plan and it runs with it
- Self-evolution: do a task > check results > fix mistakes > try again
57 · 46 · 1.8K · 127.4K
⬣PulseChain LIVE⬣ 💥 reposted
alexintosh@Alexintosh·
Wanted to try SSD streaming but did not have enough disk for the 397B-A17B. 7.2 tok/s for the 35B-A3B on an M2 MacBook Air. wild.
Dan Woods@danveloper

x.com/i/article/2034…

16 · 31 · 427 · 64.5K
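Context for the tweet above: "SSD streaming" means letting llama.cpp mmap the GGUF file so weights page in from disk on demand instead of being loaded fully into RAM up front. A minimal sketch, with placeholder filenames and paths rather than the poster's actual setup:

```shell
# mmap is llama.cpp's default load mode: the GGUF is memory-mapped, and
# for a MoE like a 35B-A3B only the pages backing the ~3B active
# parameters need to be resident per token, so a model larger than RAM
# can still run (slowly) off a fast SSD.
./build/bin/llama-cli -m qwen3.5-35b-a3b-q4.gguf -p "hello" -n 32

# With --no-mmap the whole file is read into memory up front instead,
# which fails outright if the model exceeds available RAM + swap.
./build/bin/llama-cli -m qwen3.5-35b-a3b-q4.gguf -p "hello" -n 32 --no-mmap
```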
⬣PulseChain LIVE⬣ 💥@PulseChainLIVE·
@steeve First time I've read about your project. When you release sm121, I will test it on the DGX Spark and post benchmark results here. Thanks.
0 · 0 · 0 · 23
Steeve Morin@steeve·
Yesterday I built 33,000 flash attention kernels in about 4 minutes, from sm80 to sm120, for x86_64 and arm64, from my Mac. Bazel is wild, man.
3 · 3 · 66 · 7.6K
⬣PulseChain LIVE⬣ 💥@PulseChainLIVE·
🤬🤬🤬
klöss@kloss_xyz

let me explain the ramifications of this…

→ 150,000 people just got locked out of their own cars… across 46 states… for 6 days straight and counting
→ not a software bug. not a glitch. not AI permissions gone wrong.
→ hackers flooded Intoxalock’s servers and all these vehicles just stopped starting…

these are court-ordered breathalyzer devices… people who messed up in the past but have been doing everything right since (hopefully)… and now they can’t drive to work because someone else’s security system failed. wild

connect the dots…

your electric car talks to a server to start. one breach and it’s a 50,000 dollar paperweight

your insulin pump syncs to a server. your pacemaker data lives on a server. one breach and it’s not a car that stops working… it’s a body

your smart home lock runs through a server. one breach and your front door either won’t open or won’t close

now zoom out…

Gartner projects $2.5 trillion going into AI this year… only $240 billion into securing the systems it runs on. that’s a 10 to 1 bet that nothing goes wrong

the four biggest tech companies (Alphabet, Microsoft, Meta, and Amazon) are rumored to spend $700 billion on AI infrastructure this year alone… while cybercrime is projected to cost the world $10.5 trillion

now imagine this happens to Tesla. to a hospital network. to the power grid…

every new AI integration is a new attack surface. every API is a new door. every device that “talks to the cloud” is one more thing that can be turned off by someone you’ll never meet

and I’m not saying every one of these systems will experience something. who really knows what’s secure or isn’t. but if you’re building right now… security isn’t the last layer you add. it’s the first one.

→ 150,000 people have just found out what happens when nobody prioritizes that… archaic government systems and legacy businesses are likely first on the chopping block

I hope the rest of us continuously learn from it instead of living it. the weakest link in every system is the one nobody bothered to secure

like what wild system vulnerability will we see next? does someone hack Area 51?

0 · 0 · 0 · 46
David Hendrickson@TeksEdge·
The Ultimate 128GB Local AI Hardware Battle 🥊💻
Judging Qwen3.5-27B (Bartowski IQ4_NL) on top unified-memory machines:
1️⃣ @AMD Strix Halo (Ryzen AI Max+ 395) 💰 ~$2,500 | 🚀 9–12 tps (decode) | 🎮 Full Windows AAA gaming 🏆 Speed + value king. 🖕
2️⃣ @Apple Mac Studio M3 Ultra 💰 ~$5,000 | 🚀 8–12 tps | 🍎 macOS & the Apple ecosystem with solid speeds; limited AAA gaming
3️⃣ @NVIDIA DGX Spark (GB10 Blackwell) 💰 $4,699 | 🚀 ~10 tps (~20 tps across 2 nodes) | 🐧 Linux/AI research only, with strong prefill but nerfed, bandwidth-limited decode. Pooling is difficult (new cables may fix it). AAA gaming is not optimized for Grace.
Verdict: AMD wins for most power users and the best speed/price/gaming combo. (Community benchmarks; YMMV with setup/context)
Which are you buying? 👇
David Hendrickson tweet media (3 images)
50 · 30 · 350 · 65.3K
⬣PulseChain LIVE⬣ 💥@PulseChainLIVE·
@spark_arena @TeksEdge @AMD @Apple BTW, Asus released an updated DGX OS recovery .iso a week ago that includes NVIDIA AI Workbench and seems to have better driver integration than before. NVIDIA still has the 6-month-old .iso on their website.
0 · 0 · 0 · 18
⬣PulseChain LIVE⬣ 💥@PulseChainLIVE·
@spark_arena @TeksEdge @AMD @Apple Look at the data, not the conclusions, of that video and the StorageReview article. The Asus GX10 consumes 10% less power at the same speeds, has a full copper heatsink, and the unit is 20% heavier. I ran the GX10 GPU at 80W for inference with no thermal issues, and it's much cheaper. For me, the GX10 wins.
⬣PulseChain LIVE⬣ 💥 tweet media (3 images)
0 · 0 · 1 · 55
⬣PulseChain LIVE⬣ 💥@PulseChainLIVE·
@briancaffey @Teknium I tested Nemotron a few days ago, so I'm not 100% sure I got 10 t/s or more, but I tried a REAP version of MiniMax M2.5 yesterday and got 10 t/s, and that's about the same size but not MoE, so you are right.
0 · 0 · 2 · 82
Brian Caffey@briancaffey·
@PulseChainLIVE @Teknium I’m getting about double that on my Spark, unless you aren’t counting <thinking> @PulseChainLIVE but I agree that smaller optimized models with higher concurrency are more fun :D
Brian Caffey tweet media
1 · 0 · 1 · 40
Teknium (e/λ)@Teknium·
Just got an Nvidia Spark setup. Hermes Agent installed without any issues. Now let's see what model it should be powered by 😉
38 · 7 · 288 · 12.9K
⬣PulseChain LIVE⬣ 💥@PulseChainLIVE·
@Teknium For compiling llama.cpp you need to find the best flags for maximum performance; this is what worked for me, and when you launch, use these: x.com/PulseChainLIVE… --no-mmap is very important for smooth loading of large models in unified memory.
⬣PulseChain LIVE⬣ 💥 tweet media (4 images)
0 · 0 · 1 · 164
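The poster's exact flags are behind the truncated link, so the following is only a plausible sketch of a CUDA build and launch for a unified-memory box like the DGX Spark; the architecture value, model path, and context size are assumptions, not the poster's actual settings:

```shell
# Build llama.cpp with CUDA; GB10 (DGX Spark) is assumed here to be
# sm_121, matching the "sm121" mentioned earlier in the thread.
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES=121
cmake --build build --config Release -j

# Launch: --no-mmap loads the whole model into (unified) memory up
# front, avoiding page-fault stalls mid-generation; -ngl 99 offloads
# all layers to the GPU.
./build/bin/llama-server -m model.gguf -ngl 99 -c 8192 --no-mmap
```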
Teknium (e/λ)@Teknium·
Getting Hermes ready to work with the Spark over here
Teknium (e/λ) tweet media
8 · 3 · 115 · 14.3K