vmiss

33.3K posts

@vmiss33

Independent Technology Analyst & Enterprise Architect. AI Infrastructure. Electrical Engineer. #VCDX-236 @e2eea_global. ransomware is a disaster.

USA · Joined January 2010
5.5K Following · 12.4K Followers
Pinned Tweet
vmiss
vmiss@vmiss33·
The Data Center Local Frictions Report – December 2025 provides comprehensive intelligence on community and regulatory resistance facing data center development across the United States. This monthly analysis examines news articles to identify locality-specific conflicts, revealing that nearly half of all data center coverage involves local opposition. The report breaks down resistance by conflict type and identifies geographic hotspots where development faces the strongest headwinds. Essential reading for data center developers, infrastructure investors, utility planners, and policy makers navigating AI-driven data center growth and its local impacts. theinfrastructureconstellation.com/data-center-lo…
1 reply · 0 reposts · 8 likes · 2.7K views
Vikas Singh
Vikas Singh@Vikas_bril·
@ThoughtfulTechy Solid advice, but I'd add nuance. GPU/CUDA knowledge is valuable, but most builders won't need it hands-on. What matters more is understanding AI systems architecture and its tradeoffs. Learning the NVIDIA stack is great, but don't skip the higher-level thinking.
1 reply · 0 reposts · 4 likes · 636 views
Greg Powell
Greg Powell@ThoughtfulTechy·
Learn NVIDIA tech ASAP. The engineers who understand GPUs, CUDA, and AI factories will run the next decade of the AI era.
27 replies · 189 reposts · 1.9K likes · 64.9K views
vmiss
vmiss@vmiss33·
@AutismCapital You’re safe, Claude won’t write those shitty dark romance novels she loves
0 replies · 0 reposts · 5 likes · 212 views
vmiss
vmiss@vmiss33·
@MaMoMVPY Right on. That's because no one bothers understanding what makes Claude "work", if someone spent 5 minutes learning about the transformer architecture they would understand you, but I'll bet most will argue with you.
1 reply · 0 reposts · 4 likes · 907 views
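To make the point above concrete: the transformer's core operation is scaled dot-product attention, and the whole mechanism fits in a few lines. A minimal NumPy sketch (shapes and variable names are illustrative, not any particular model's code):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)   # (n_queries, n_keys) similarity matrix
    weights = softmax(scores)       # each query's distribution over keys
    return weights @ V              # weighted mix of value vectors
```

Every output is just a weighted mix of value vectors, with weights set by query-key similarity; stacking this with feed-forward layers is essentially the whole architecture.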
vmiss retweeted
0xSero
0xSero@0xSero·
I'm back on my vllm-studio grind, we need better chat for local models. I am going to implement Mario's Pi into vllm-studio chat. By April 7th you will be able to: 1. Download top hardware-compatible models 2. One-click deploy into Hermes Agent 3. Monitor your Hermes Agent
5 replies · 11 reposts · 221 likes · 7.1K views
vmiss
vmiss@vmiss33·
I'm def not falling for the latest Claude hype like I did with Claude Code and wasted 200 bucks.
0 replies · 0 reposts · 5 likes · 724 views
Moulay Amine Jaidi
Moulay Amine Jaidi@Semantichasm·
🚀 Jailbreaking LLMs. Our recent research has explored how #adversarialai can be used to bypass refusal responses. Using @HotAisle Solo @AIatAMD #MI300X, we've auto-assessed #LLMs with various approaches and successfully circumvented the guardrails of open-source models.
3 replies · 0 reposts · 1 like · 669 views
vmiss
vmiss@vmiss33·
Just finished a @PalantirTech Foundry Speed Run, the possibilities with this stuff are endless.
0 replies · 0 reposts · 1 like · 250 views
vmiss
vmiss@vmiss33·
I generally don't go to movies, but I read Project Hail Mary and loved it. Will I be disappointed if I go see it?
19 replies · 1 repost · 18 likes · 3.1K views
vmiss
vmiss@vmiss33·
New insult? You just aren't worth the tokens...
0 replies · 0 reposts · 14 likes · 395 views
vmiss retweeted
Hot Aisle
Hot Aisle@HotAisle·
A previously empty, but now totally full $AMD neocloud must mean that it isn't all that bad, right?
1 reply · 2 reposts · 32 likes · 3K views
vmiss
vmiss@vmiss33·
@chadwahl @MeekMill Oh wow I didn’t know there was a free developer tier account until now
1 reply · 0 reposts · 12 likes · 2.8K views
MeekMill
MeekMill@MeekMill·
I need to use Palantir for like an hr!!!
440 replies · 1.3K reposts · 10.8K likes · 2.6M views
Sudo su
Sudo su@sudoingX·
i just became a mod of x/LocalLLaMA. if you're running local models on your own hardware and want in, the community is open. pinned and highlighted on my profile. approving members starting today. drop your setup below and i'll get you in. 3060, 3090, 4090, 5090, AMD, whatever you're running. all welcome. if you're hitting issues with hermes agent, llama.cpp, model selection, configs, i'm here. let's make local AI accessible for everyone.
Sudo su@sudoingX

let me get you started in local AI and bring you to the edge. if you have a GPU or are thinking about diving into the local LLM rabbit hole, the first thing you do before any setup is join x/LocalLLaMA. this is the community that will help you at every step. post your issue and we will direct you, debug with you, and save you hours of work.

once you're in, follow these three:

@TheAhmadOsman: the oracle. this is where you consume the latest edges in infrastructure and AI. if something dropped you hear it from him first. his content alone will keep you ahead of most.

@0xsero: one-man army when it comes to model compression, novel quantization research, and new tools and tricks that make your local setup better. you will learn, experiment, and discover things you didn't know existed.

@Teknium: maker of Hermes Agent, the agent i use every day from @NousResearch. from Teknium you don't just stay at the frontier, you get your hands on the tools before everyone else. this is where things are headed.

if you follow me, follow these three and join the community. you will be ahead of most people in this space. if you run into wrong configs, get stuck debugging hardware, or can't get a model to load, post there so we can help.

get started with local AI now. not only understand the stack but own your cognition. don't pay openai fees on top of giving them your prompts, your research, and your most valuable thinking to be monitored and metered. buy a GPU and build your own token factory.

327 replies · 43 reposts · 816 likes · 59.9K views
vmiss
vmiss@vmiss33·
@marcf999 Absolutely cooked. Data center moratoriums all over the country.
0 replies · 0 reposts · 1 like · 27 views
vmiss
vmiss@vmiss33·
You think the run on GPUs is bad, just wait till everyone starts looking at the progress of all those data centers being built...
2 replies · 0 reposts · 6 likes · 314 views
vmiss
vmiss@vmiss33·
@srai009 Great resources, thank you for sharing! I'll take a look.
0 replies · 0 reposts · 0 likes · 18 views
SR
SR@srai009·
@vmiss33 Do it, it's a great way to spend the weekend. And I'd recommend the articles in this post as part of the reading: x.com/RadicalNumeric…
Radical Numerics@RadicalNumerics

Scaling scientific world models requires co-designing architectures, training objectives, and numerics. Today, we share the first posts in our series on low-precision pretraining, starting with NVIDIA's NVFP4 recipe for stable 4-bit training. Part 1: radicalnumerics.ai/blog/nvfp4-par… Part 2: radicalnumerics.ai/blog/nvfp4-par… We cover floating point fundamentals, heuristics, custom CUDA kernels, and stabilization techniques. Future entries will cover custom recipes and results on hybrid architectures.

1 reply · 0 reposts · 1 like · 49 views
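For a rough sense of what a 4-bit recipe like the NVFP4 one above does under the hood: FP4 (E2M1) can encode only the magnitudes {0, 0.5, 1, 1.5, 2, 3, 4, 6}, so tensors are split into small blocks and each block carries its own scale factor. A toy quantize/dequantize round trip using a simple max-abs block scale (the real NVFP4 recipe uses FP8 scale factors and more careful rounding; this is just a sketch):

```python
import numpy as np

# the eight magnitudes representable in FP4 (E2M1); the sign bit adds the rest
FP4_VALUES = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def snap_fp4(v):
    # round each element's magnitude to the nearest FP4 code, keeping its sign
    idx = np.abs(np.abs(v)[..., None] - FP4_VALUES).argmin(axis=-1)
    return np.sign(v) * FP4_VALUES[idx]

def fp4_round_trip(x, block=16):
    # quantize to block-scaled FP4, then dequantize back to float
    x = np.asarray(x, dtype=np.float64)
    pad = (-len(x)) % block
    xp = np.pad(x, (0, pad)).reshape(-1, block)
    scale = np.abs(xp).max(axis=1, keepdims=True) / FP4_VALUES[-1]
    scale = np.where(scale == 0, 1.0, scale)    # avoid 0/0 on all-zero blocks
    q = snap_fp4(xp / scale)                    # what the 4-bit storage would hold
    return (q * scale).reshape(-1)[:len(x)]     # dequantized values
```

Values landing exactly on a scaled FP4 code survive unchanged; everything else gets rounded, which is why the stabilization techniques the posts above describe matter for keeping 4-bit training stable.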
vmiss
vmiss@vmiss33·
Please someone stop me from spending the weekend studying quantization.
3 replies · 0 reposts · 6 likes · 442 views
vmiss
vmiss@vmiss33·
@HotAisle I feel like that would overheat after swapping 2-3 serials
1 reply · 0 reposts · 1 like · 62 views