Victor Laynez

11.3K posts

@roteno

Builder of RF embedded sensors & systems for work and play!

DC, Austin, Fort Collins · Joined December 2007
837 Following · 1.9K Followers
Victor Laynez@roteno·
@jay_k Any chance you solved a 60-second timeout I'm bumping into for anything local? SW-only here, and anything above 60 seconds errors out in red. Seems to be a client-side timeout…
Jay-K@jay_k·
🦞 OpenClaw on local models has been an absolute grind, but I finally got my Radeon 6750 XT actually using the GPU properly. Running everything on Ubuntu with Ollama now. No more paying for API calls. No more watching the CPU melt while the GPU sits there doing nothing. Here's how it went down.

My current setup: AMD Radeon RX 6750 XT (12GB), Ubuntu Linux, Ollama backend, OpenClaw handling the agent stuff (Telegram integration, tools, etc.). I just wanted something that runs fully offline and is actually usable day-to-day.

The first attempts were rough. Tried a couple of smaller models that kinda worked but felt outdated and slow. Moved to Qwen3.5, a more modern model, but the GPU was still barely waking up: radeontop sitting at ~10%, CPU going nuts at 200%. Everything felt laggy. AMD + local LLMs can be brutal sometimes.

The turning point was switching to Ollama on Ubuntu. Spent a ton of time tweaking the model settings and cleaning up my OpenClaw config. Pushed as many layers as I could to the GPU and adjusted context and batch sizes to fit what OpenClaw needs. Kept staring at the monitors… and finally the GPU started pulling real load. That moment when utilization actually jumped? So satisfying.

Responses feel way snappier now. Tool calling is reliable, and everything stays local. 12GB isn't massive, but with the right quantized models this card can actually run a proper local agent.

Not gonna lie: it's still not plug-and-play. Linux + Ollama + AMD needs some driver fiddling and careful tuning (ROCm stuff, model choices, etc.). Bigger models would want more VRAM, but this setup shows consumer GPUs can handle real self-hosted agents.

The whole thing took around 12 hours of debugging and tweaking. Totally worth it though. Zero token costs, everything private, and my own hardware is finally doing the work instead of just sitting there.

Anyone else running OpenClaw with Ollama on AMD cards (especially the 6750 XT or similar RDNA2 stuff)? Use what you've got; you can always upgrade later.

What's your setup like? Any tips or gotchas I should know about? Reply "OpenClaw Setup" if you want my config settings. #OpenClaw #LocalLLM #Ollama #Radeon #RX6750XT #SelfHostedAI #Ubuntu
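For anyone fighting the same RDNA2 + ROCm battle, a minimal sketch of the kind of tuning described above. The GFX version override is a widely used community workaround for RDNA2 cards ROCm doesn't officially list (the 6750 XT reports gfx1031); the model name and parameter values here are illustrative, not the author's actual config.

```shell
# Make ROCm treat the RX 6750 XT (gfx1031) as the supported gfx1030 target.
# Common RDNA2 workaround; remove if your ROCm build supports the card natively.
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Example Modelfile pushing layers to the GPU and sizing the context window.
# num_gpu = layers to offload (a large value means "as many as fit in VRAM").
cat > Modelfile <<'EOF'
FROM qwen2.5:7b
PARAMETER num_gpu 99
PARAMETER num_ctx 8192
EOF

ollama create qwen-agent -f Modelfile

# Watch utilization while it generates to confirm the GPU is doing real work.
radeontop
```

If radeontop still shows the GPU idle, checking `ollama ps` for how much of the model landed on the GPU versus CPU is usually the quickest tell.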
John@AgenticCowboy·
Google just cooked with Gemma 4. The new 31B model is already live on Ollama and holding its own against massive 397B-parameter models on the Arena leaderboard. For OpenClaw users, this is a perfect local agentic option. No API keys, full privacy, runs great on a single GPU. Just run:

ollama pull gemma4:31b

Local AI keeps getting stronger.
Google DeepMind@GoogleDeepMind

Available in four sizes: 🔵 31B Dense & 26B MoE: state-of-the-art performance for advanced local reasoning tasks – like custom coding assistants or analyzing scientific datasets. 🔵 E4B & E2B (Edge): built for mobile with real-time text, vision, and audio processing.

Victor Laynez@roteno·
Got OpenClaw and a local Ollama set up in Docker containers, but I can't get around this 60s GIN timeout… even with hack patching. When responses take more than 60s, how does OpenClaw work? @steipete @openclaw I must have done something fundamentally silly, or folks aren't using this with locally hosted SW-only LLMs on modest machines.
Victor Laynez@roteno·
60-second timeout on the OpenClaw side no matter what I do.
Victor Laynez@roteno·
I seem to be stuck. WebChat/WebGUI OpenClaw and now a channel (I used Matrix)… 60-second GIN 500 timeout with anything related to OpenClaw. Large-token curls to the locally hosted LLM work just fine, even with >60s to first token.
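A quick way to separate the backend from the client when chasing this: hit Ollama's standard /api/generate endpoint directly and watch when the first streamed token arrives. The model name and prompt are placeholders.

```shell
# Stream directly from Ollama and note the time to first token.
# If curl happily streams past 60s but OpenClaw's WebChat errors out,
# the timeout lives in the client/session layer, not the model server.
curl -N http://localhost:11434/api/generate -d '{
  "model": "qwen2.5:7b",
  "prompt": "Explain LDPC codes in detail.",
  "stream": true
}'
```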
Victor Laynez@roteno·
Is there an understanding not to use the WebChat if you can't get a first-token response in 60 seconds? I suspect I'm hitting a bug of my own creation. :)
Victor Laynez@roteno·
OpenClaw hobbyists: am I to believe that locally hosted Ollama (small) models on a SW-only machine hit a client/session/who-knows 60s timeout constraint with WebChat? Does my setup require a channel (like Signal), with WebChat (the UI) a no-go? #OpenClaw
Victor Laynez@roteno·
@davepl1968 Did you run into tool-use issues with R1? I had them early on but might go back to R1 to try it out. Could have been issues on my end too.
Dave W Plummer@davepl1968·
OK, I installed OpenCLAW. I set it up with a backend AI server on a Dell 7875 workstation with dual Blackwell RTX6000 GPUs running DeepSeek R1 32B. And it can fall back to Qwen 2.5 for easy stuff. I created an agent to scour eBay for things I might be interested in, compile a summary, and mail it to myself. But I could have done that in shell script... so... what now? What are folks doing with it that's interesting? What are you having your agents do?
Victor Laynez@roteno·
We are still very much in the plumbing phase of this technology cycle. The larger difference in this era is that engineers/designers/technologists have a global audience. If you're an engineer in R&D right now, read the above twice.
Victor Laynez@roteno·
I have an old machine that I'm going to host #openclaw on for a spell, pointed at ChatGPT. Any fun use cases you've explored for your personal life? Professional? Thoughts?
Victor Laynez@roteno·
@DabsMalone “Never” and “always” are dangerous words in Engineering. Unless we are talking physics, we don’t throw around those words these days ;)
Dabs🩸@DabsMalone·
AI gives everyone answers instantly. But engineering was never about answers. It’s about understanding the system well enough to ask the right questions. You will never vibe code an additive manufacturing app better than I could, because I know what problems need to be solved from real world experience. Without that… the smartest AI in the world just makes you confidently wrong.
Victor Laynez@roteno·
@chris_j_paxton I'm sure others will say the same. Plenty of infrastructure in the world for bipeds. First to scale could have a serious advantage. Also, the generalization-vs-specialization pendulum is with the former… likely to stay there for a while.
Chris Paxton@chris_j_paxton·
If you're truly AI-pilled, how could you square that with humanoid robots? I feel like AI-assisted CAD + manufacturing + cross-embodiment learning + high-fidelity simulation would allow for an infinite profusion of diverse robots for different ecological niches.
Victor Laynez@roteno·
@BackwoodsEnginr I also like to teach two bounds and their context: kTB and the total output power of the Sun. Helps students check their math so they don't accidentally write 500 dBW on any reasonable problem. :)
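The two bounds above make a nice numeric sanity check. A small sketch using standard constants (Boltzmann's k, the T = 290 K reference temperature, and the Sun's luminosity of about 3.8e26 W):

```python
import math

k = 1.380649e-23          # Boltzmann constant, J/K
T = 290.0                 # standard reference temperature, K
L_sun = 3.828e26          # solar luminosity, W

# Lower bound: thermal noise density kT in dBm/Hz (multiply by B for kTB).
noise_dbm_per_hz = 10 * math.log10(k * T * 1e3)   # ≈ -174 dBm/Hz

# Upper bound: total output power of the Sun, in dBW.
sun_dbw = 10 * math.log10(L_sun)                  # ≈ 265.8 dBW

# A "500 dBW" answer (1e50 W) overshoots the entire Sun by ~234 dB.
print(round(noise_dbm_per_hz, 1), round(sun_dbw, 1))
```

Any answer below the noise floor or above the Sun fails the smell test before a single formula gets rechecked.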
Victor Laynez@roteno·
A brief history of the march toward Shannon's limit is both fun and useful to understand. Uncoded, to Voyager (Conv+RS), to TPC (if you'd like to offer some drama), to LDPC. An easy 30 minutes that should help EEs entering comm theory. Might need to talk about noise/channel models first… just a bit.
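That arc is easy to put numbers on. A sketch of the Shannon bound on Eb/N0, which follows from capacity: Eb/N0 ≥ (2^η − 1)/η for spectral efficiency η, approaching ln 2 ≈ −1.59 dB as η → 0. The per-scheme operating points mentioned in the comments are ballpark textbook figures, not measurements.

```python
import math

def min_ebn0_db(eta):
    """Shannon bound on Eb/N0 (dB) at spectral efficiency eta (bit/s/Hz)."""
    return 10 * math.log10((2**eta - 1) / eta)

# As eta -> 0 the bound approaches ln(2), i.e. about -1.59 dB.
print(round(min_ebn0_db(1e-6), 2))   # ≈ -1.59 dB
print(round(min_ebn0_db(1.0), 2))    # 0 dB at 1 bit/s/Hz

# Rough historical operating points (Eb/N0 near BER 1e-5, textbook ballpark):
# uncoded BPSK ~9.6 dB -> Voyager Conv+RS ~2.5 dB -> turbo/LDPC within ~1 dB.
```

Plotting those operating points against the bound makes the decades of coding gains land in one picture.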
Backwoods Engineer - THE ORIGINAL@BackwoodsEnginr·
Alright, EEs. I'm teaching a Fundamentals of Wireless Comm Engineering class next spring. It's a survey class but I want hands-on. What should I include? What's your alma mater do? After y'all sound off, I'll chime in, but I don't want to bias y'all (heh) yet. GO!
Victor Laynez@roteno·
I have not purchased from @adafruit in far too long! What a great company, exposing students, hobbyists, and many more to electronics. Kudos for everything you do. Looking forward to a few small boards to make some Home Assistant-controlled sleep-noise generators for the kids. Might even try some generative approaches for fun.
Victor Laynez@roteno·
Anyone following me self-hosting an LLM? Small, large, both? Any cautions if I decide to embark on this project? Thinking about a smaller ~3B smart-home control model and perhaps the DeepSeek 16B for the professor persona… A few other odds and ends for fun.
Victor Laynez@roteno·
Happy New Year (2026) my East Coast family, friends, colleagues, and fellow Engineers. Build the future you want to be a part of. It’s the tougher path but how lucky are we that we can do just that.
Victor Laynez@roteno·
@grok @ChatGPTapp I’ve seemingly entered an infinite loop of support. I believe this one is the secret to perpetual motion. Not quite a nicer looking drawing of the original I provided.
Victor Laynez@roteno·
AI pros: I'm using @grok and @ChatGPTapp to design an outdoor antenna. Fun experience, and so far it's spot on with theory and practice. The antenna has a few support posts on our property. The text description and the first-order performance analysis are spot on. However… if I ask for a simple top-view diagram, the post locations, associated geometry, wire segments, and even labels I've assigned are all wrong. It's as if the image generation has no comprehension of the context window. Can someone explain what's going on here? The images are so wildly incorrect it's as if it just drew random lines, squares, and labels.