Jose Miguel

443 posts

@SiuIsHere

Route home

Joined January 2009
204 Following · 30 Followers
Jose Miguel
Jose Miguel@SiuIsHere·
@Prakhar6200 I don't understand it either. After spending a month with a Battlemage GPU, the problem is the software and compatibility.
English
1
0
0
85
Sudo su
Sudo su@sudoingX·
what VRAM are you working with? i'm planning my next round of benchmarks and i want to test what matters for YOUR hardware. drop your exact GPU below. model, quant, what you're running. i'll tell you if there's something better for your setup.
English
135
3
92
38.9K
Jose Miguel
Jose Miguel@SiuIsHere·
@sudoingX How can I get similar results with an Intel Arc B50 Pro with 16 GB VRAM? I tried, without any result, since qwen3.5-14B cannot quantize the KV cache on this GPU :(
English
0
0
0
62
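For the KV-cache issue above: in llama.cpp-style stacks, KV quantization is normally enabled per-cache with type flags, and quantizing the V cache typically requires flash attention to be turned on first. A minimal sketch, assuming a SYCL build of llama.cpp and a local Qwen GGUF; the model path is a placeholder and exact flag spellings vary between llama.cpp versions.

```shell
# Hypothetical invocation: offload all layers, enable flash attention,
# then drop both KV caches to 8-bit to roughly halve their VRAM cost.
./llama-server -m ./qwen-14b-q4_k_m.gguf \
    -ngl 99 \
    --flash-attn \
    --cache-type-k q8_0 --cache-type-v q8_0 \
    -c 8192
```

If the V cache still refuses to quantize on a given backend, a common fallback is quantizing only the K cache and leaving V at f16.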
David Hendrickson
David Hendrickson@TeksEdge·
🚨 Big News From @Intel for Local / Sovereign AI on Arc Pro B70 🔥

📟 Huge memory boost on Windows: up to 93% of system RAM can now be dynamically allocated to the 👉 iGPU 👈 (e.g. ~59GB on a 64GB system)

📈 @Intel just released its Q1 2026 Arc Pro driver (v32.0.101.8515), and it's important for affordable high-VRAM inference.

What's New:
- Official full support for the new Arc Pro B70 (32GB GDDR6) and B65 discrete GPUs 💎
- Better support for built-in Arc Pro graphics in Core Ultra processors
- 💵 Price: the B70 is just $949, roughly 25–33% of the cost of comparable NVIDIA RTX Pro 4000 / 5000-class cards with similar VRAM

👀 Why This Matters So Much:
1. Competition for AMD and NVIDIA
2. This is Intel finally getting serious about the local AI inference market
3. Until now, getting 32GB+ VRAM for local LLMs usually meant expensive NVIDIA cards (5090) or very pricey new pro cards

The Arc Pro B70 brings 32GB VRAM at a consumer-friendly price, enough to run bigger models (70B+ Q4 models, MoE models, long-context agents) on your own hardware without massive cloud bills or hunting for expensive used 3090s.

Intel claims strong gains vs the NVIDIA RTX Pro 4000:
💰 Up to 2x tokens per dollar (🐧)
🪟 Up to 2.2x larger context windows (🐧)
🪙 Up to 85% higher token throughput (🐧)
⚡️ Up to 6.2x faster time-to-first-token, especially in multi-user/agent scenarios (🐧)

Does it work on Windows & Linux? ✅ Yes, both platforms are supported: a full WHQL driver for Windows 10/11, and Linux support included (Ubuntu confirmed) with a strong oneAPI/OpenVINO ecosystem.

How does it speed up LLMs without CUDA? Intel doesn't use CUDA; it has its own stack:
1⃣ oneAPI + OpenVINO (highly optimized inference toolkit)
2⃣ IPEX-LLM (Intel's PyTorch extension for fast LLM acceleration)
3⃣ vLLM with the XPU backend (native Intel GPU support)

📟 The card has 256 XMX engines (dedicated AI matrix units) that are very efficient on BF16 / INT8 quantized models.

📰 Bottom Line: this driver + B70 launch is Intel's clearest signal yet that it wants a real slice of the exploding personal sovereign-AI hardware market. For the first time, 32GB-class local inference is becoming genuinely affordable and accessible. A huge potential step forward for private, on-device AI.

What do you think? Is this a legitimate change for local LLMs, or are we still waiting on real benchmarks from @digitalix? 👇
David Hendrickson tweet media
English
3
4
47
7.5K
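The thread above keeps sizing models against 16GB and 32GB cards. A rough rule of thumb is weights ≈ parameters × bits-per-weight ÷ 8, plus some fixed overhead for the runtime and KV cache. A minimal sketch; the 4.5 bits/weight figure for Q4_K_M-style quants and the 1.5GB overhead allowance are assumptions, not measurements, so treat the results as ballpark only.

```python
def model_vram_gb(params_b: float, bits_per_weight: float,
                  overhead_gb: float = 1.5) -> float:
    """Rough VRAM estimate: weight bytes at the given quantization
    width, plus a flat allowance for runtime overhead and KV cache."""
    weights_gb = params_b * bits_per_weight / 8  # billions of params -> GB
    return weights_gb + overhead_gb

# A 14B model at ~4.5 bits/weight vs. a 16GB Arc B50:
print(model_vram_gb(14, 4.5) < 16)   # → True (fits, with headroom for context)
# A dense 70B model at the same width vs. a 32GB B70:
print(model_vram_gb(70, 4.5) < 32)   # → False (needs offload or a MoE model)
```

By this estimate a dense 70B Q4 model does not fit in 32GB on its own, which suggests the tweet's "70B+ Q4" claim leans on MoE models or partial CPU offload.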
Tom Turney
Tom Turney@no_stp_on_snek·
Working on a diagnostic script to benchmark TurboQuant across different hardware. Already have solid coverage on Apple Silicon (M5 Max 128GB, M1 Max 64GB) from folks in the community, but need more variety, especially NVIDIA GPUs. If you’ve got a different setup and want to help test, reach out.
English
16
1
23
2.6K
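A hardware-agnostic throughput harness like the one Tom describes can be sketched in a few lines: time a generation call and divide tokens by elapsed seconds. `fake_generate` below is a stand-in so the code runs without any GPU or model; TurboQuant itself is not modeled here.

```python
import time

def tokens_per_second(generate, n_tokens: int) -> float:
    """Time a generation callable that produces n_tokens tokens and
    return throughput. `generate` stands in for any backend's
    generation call (llama.cpp, MLX, etc.)."""
    start = time.perf_counter()
    generate(n_tokens)
    elapsed = time.perf_counter() - start
    return n_tokens / elapsed

# Stub backend so the harness is runnable anywhere:
def fake_generate(n: int) -> None:
    for _ in range(n):
        time.sleep(0.001)  # pretend each token costs ~1 ms

print(f"{tokens_per_second(fake_generate, 100):.1f} tok/s")
```

In a real diagnostic you would warm up the model first and report the median of several runs, since first-run numbers include load and compile time.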
Sudo su
Sudo su@sudoingX·
i hear you. been hearing this from DMs, replies, and threads for weeks. everyone running local models is stuck digging through tweets and reddit posts for configs that should be in one place.

that's why i've been collecting data. the polls, the "drop your GPU" threads, every benchmark i've published. it's not random content. it's the foundation for something i'm building.

a place where you search by GPU or model and get verified configs, tok/s, VRAM usage and exact flags. community submissions. real numbers from real hardware. no influencer guesses. no spec sheet theory. tested data you can copy-paste and run.

i'm close. more data comes in every day. more people are contributing. if you have a 3090 and can't get it running well, that's exactly the problem this solves. patience. it's coming.
Joseph Sauvage@JoesInvestments

I don’t understand why there isn’t some sort of central repository for optimized cards, specs, and configurations. I hear everybody talking about local AI on Nvidia GPUs, yet I can’t get my 3090 running well at all. It’s quite fatiguing, in fact. Meanwhile, people like you who contribute immensely to the community seem to have all the answers, but I can’t find them anywhere. It’s a very strange situation.

English
17
4
173
8.7K
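The database sudoingX describes, searchable by GPU or model and returning configs, tok/s, VRAM usage, and flags, maps naturally onto a small record type. A minimal sketch; the field names and the entries are illustrative placeholders (the numeric fields are deliberately zeroed), not real benchmark data.

```python
from dataclasses import dataclass

@dataclass
class BenchResult:
    gpu: str        # e.g. "RTX 3090"
    model: str      # model identifier
    quant: str      # quantization label, e.g. "Q4_K_M"
    tok_per_s: float
    vram_gb: float
    flags: str      # exact launch flags that produced the numbers

# Placeholder entries only -- tok/s and VRAM are not measured values.
DB = [
    BenchResult("RTX 3090", "llama-3-8b", "Q4_K_M", 0.0, 0.0, "-ngl 99"),
    BenchResult("Arc B50", "qwen-14b", "Q4_K_M", 0.0, 0.0, "-ngl 99 -fa"),
]

def configs_for(gpu: str) -> list[BenchResult]:
    """Search by GPU, as the post describes."""
    return [r for r in DB if r.gpu == gpu]

print([r.model for r in configs_for("RTX 3090")])  # → ['llama-3-8b']
```

Community submissions would append rows; the value is that `flags` is copy-pasteable alongside the measured numbers.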
Sudo su
Sudo su@sudoingX·
if you're planning a new build or figuring out your architecture, post it in the community. we'll help you get the most out of your hardware. the goal is to help each other own our compute and stop feeding prompts to corporations mining your cognition.
English
5
0
38
3.2K
Sudo su
Sudo su@sudoingX·
i just became a mod of x/LocalLLaMA. if you're running local models on your own hardware and want in, the community is open. pinned and highlighted on my profile. approving members starting today. drop your setup below and i'll get you in. 3060, 3090, 4090, 5090, AMD, whatever you're running. all welcome. if you're hitting issues with hermes agent, llama.cpp, model selection, configs, i'm here. let's make local AI accessible for everyone.
Sudo su tweet media
Sudo su@sudoingX

let me get you started in local AI and bring you to the edge. if you have a GPU or are thinking about diving into the local LLM rabbit hole, the first thing you do before any setup is join x/LocalLLaMA. this is the community that will help you at every step. post your issue and we will direct you, debug with you, and save you hours of work.

once you're in, follow these three:

@TheAhmadOsman: the oracle. this is where you consume the latest edges in infrastructure and AI. if something dropped you hear it from him first. his content alone will keep you ahead of most.

@0xsero: one-man army when it comes to model compression, novel quantization research, new tools and tricks that make your local setup better. you will learn, experiment, and discover things you didn't know existed.

@Teknium: maker of Hermes Agent, the agent i use every day, from @NousResearch. from Teknium you don't just stay at the frontier, you get your hands on the tools before everyone else. this is where things are headed.

if you follow me, follow these three and join the community. you will be ahead of most people in this space. if you run into wrong configs, get stuck debugging hardware, or can't get a model to load, post there so we can help.

get started with local AI now. not only understand the stack but own your cognition. don't pay openai fees on top of giving them your prompts, your research, and your most valuable thinking to be monitored and metered. buy a GPU and build your own token factory.

English
325
42
809
61.1K
Jose Miguel
Jose Miguel@SiuIsHere·
@sudoingX In fact, I'm having trouble finding a support community for OpenVINO with an Arc Pro B50. Updates to the latest models for this GPU are very slow ☹️
English
0
0
0
81
Sudo su
Sudo su@sudoingX·
hey if you're running hermes agent on a 3060 or any single GPU and hitting issues, drop them below. i've tested on this exact card and i'll help you get it running. setup problems, config issues, model selection, optimization. all welcome.
Magical truth-saying Bastard Spider 🕷@Ysrthgrathe42

@sudoingX Framework Desktop with 96GB allocated, but I've been spending more time trying to get Hermes Agent running, as reported, on an RTX 3060 on another machine.

English
17
1
109
10.2K
Alcaldía La Libertad Sur
Alcaldía La Libertad Sur@SantaTeclaSV·
Sign up for the public-speaking training! You'll learn to speak naturally and confidently, control your nerves, use body language, structure your ideas clearly, and improve your tone of voice and diction. 📍 Café Palacio. ✅ $25 (breakfast included). Registration: 7747-6100.
Alcaldía La Libertad Sur tweet media (x3)
Español
1
0
0
217
Lignite Watchfaces
Lignite Watchfaces@TeamLignite·
We will offer the option to hide the shadow in the next update 😉
English
1
0
1
185
Jose Miguel
Jose Miguel@SiuIsHere·
@VisualStudio It does not work for me: {"error":{"message":"Client specified an invalid argument","code":"invalid_request_body"}} Visual Studio Community 2022, v17.14
English
0
0
1
91
Visual Studio
Visual Studio@VisualStudio·
Big news for developers! Grok Code Fast 1 is now available in Visual Studio. This advanced AI model brings smarter, faster coding assistance right into your favorite IDEs via GitHub Copilot Chat. Available in public preview for Copilot Pro, Pro+, Business, and Enterprise plans, Grok Code Fast 1 supports agent, ask, and edit modes, making coding more intuitive and efficient. 👉 Admins: Enable the Grok Code Fast 1 policy in Copilot settings to unlock access for your team. 🔗 Read the full changelog to learn more and get started: msft.it/6016sIhco #AIcoding #GitHubCopilot
Visual Studio tweet media
English
214
464
3.1K
9.8M
Jose Miguel
Jose Miguel@SiuIsHere·
@ericmigi LCD_DOT, but I think the original developer is needed to check the weather info
English
0
0
0
27
Eric Migicovsky
Eric Migicovsky@ericmigi·
post name/link to your fav watchface and I will try out some on PT2
English
25
3
46
6.5K
Eric Migicovsky
Eric Migicovsky@ericmigi·
Mindblowingly cool breakthrough - we made it so all existing Pebble watchfaces/apps now automatically scale up to fill larger and higher resolution Pebble Time 2 display!
Eric Migicovsky tweet media
English
23
34
507
17.2K
GOG.COM
GOG.COM@GOGcom·
HOLLOW KNIGHT: SILKSONG IS OUT NOW!!! 👉 bit.ly/Silksong_GOG There’s nothing more that needs to be said. Grab this masterpiece DRM-free on GOG and keep it yours forever. The whole gaming world has been waiting for this day, and it’s finally here 💜 @TeamCherryGames
English
37
379
3K
121.8K
Jose Miguel
Jose Miguel@SiuIsHere·
So then, do we conclude that we, too, are an artificial intelligence?
Español
0
0
1
55
Jose Miguel reposted
MURPHSLIFE
MURPHSLIFE@MURPHSLIFE·
I made a YouTube channel for our community and farm ALL in Spanish here in beautiful El Salvador. If you’re looking to support, please give it a follow youtu.be/LPcvz--TSOg
YouTube video
English
21
70
332
15.7K
Jose Miguel
Jose Miguel@SiuIsHere·
@njelsalvador @rivenalvarez @MarkOfBitcoin Practical approach to mass adoption: let's pay for public transport in sats. Let people use Bitcoin like any other currency. Normal citizens don't hold assets; they just live day to day.
English
1
0
2
25
⚡️Nicki & James in El Salvador 🇸🇻
@rivenalvarez @MarkOfBitcoin Great question. And challenging to answer. There is a general lack of knowledge around what money is. People are generally lazy. People are generally skeptical, ironically often more of things that are good. People often need pain to change. Lead by example is a great start.
English
1
0
2
15
Mark of Bitcoin
Mark of Bitcoin@MarkOfBitcoin·
I can confirm that Berlin is in fact the most Bitcoin-friendly, widely-adopted circular economy in El Salvador, and I would guess the world. When you can buy a pupusa and have a Bitcoin conversation with a street vendor at 3pm on a weekday, THAT is adoption.
Joe Nakamoto ⚡️@JoeNakamoto

🎥 ⚡️ A *GENUINE* BITCOIN CITY EXISTS IN EL SALVADOR 🎥 ⚡️ No, not that 17 billion dollar "bitcoin city" Bukele promised you all. LET ME SHOW YOU 👇 👀

English
1
0
3
174
Jose Miguel
Jose Miguel@SiuIsHere·
I received this notice from my internet provider and it seems legitimate :( I'm not complaining about the service; honestly, it has worked well. But it seems to coincide with the minimum-wage increase, and that's not right. Better to increase your revenue by selling more, which I believe has actually happened.
Jose Miguel tweet media
Español
0
0
1
65