🤦🏻‍♂️🤦🏻‍♂️TheDUMBESTguyInAi🤦🏻‍♂️🤦🏻‍♂️

3.2K posts

@LeanKinPrazli

Token Efficiency Optimization

Joined December 2024
267 Following · 73 Followers
Dave
Dave@GamewithDave·
Without telling me your age: what is the first video game you played? GIFs ONLY!!!
16.5K · 143 · 4.1K · 1.5M
Espen JD
Espen JD@Snixtp·
Things have gotten so lazy in this agentic era that I ask Codex to download models for me and set them up. Almost everything I do on my computer is first attempted through Codex, lol. Don't know if that's a good thing or not 😅
[image attached]
2 · 0 · 5 · 267
Alex
Alex@alexfredo87·
📢 I’m testing @ViggleAI new AI tool, PINOC, and it’s seriously impressive. You can create 3D Gaussian splatting models from just one image and animate them with a video! It’s the first AI I’ve tried that can animate Gaussian splatting models. I also talked to someone from Viggle, and they might add the option to import your own Gaussian splatting models and animate them. If that happens, I’ll probably make a video tutorial showing how to boost the detail level of a Gaussian splatting model by 4x to create highly realistic animated characters.
5 · 43 · 370 · 17.6K
M4rc0z
M4rc0z@dreamworks2050·
112 tokens/s on Qwen3.6 27B with llama.cpp on a single 3090, 130k context 🔥
6 · 0 · 8 · 547
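A setup like the one above is typically launched with llama.cpp's `llama-server`. A minimal sketch, assuming a local GGUF file (the filename and port here are placeholders, not from the post):

```shell
# Serve a quantized model on one GPU with a 130k context window.
# -ngl 99 offloads all layers to the GPU; -c sets the context size in tokens.
llama-server \
  -m ./qwen3.6-27b-q4_k_m.gguf \
  -c 131072 \
  -ngl 99 \
  --port 8080
```

With the server up, any OpenAI-compatible client can point at `http://localhost:8080/v1`. Actual tokens/s depends on quantization, VRAM headroom, and whether the KV cache fits on the card.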
nic
nic@nicdunz·
so im actually really stupid
12 · 1 · 24 · 1.7K
Sudo su
Sudo su@sudoingX·
most of you don't know how hard hermes agent is optimized for local AI at the system level. watch the full setup flow on screen: you paste an OpenAI-compatible v1 endpoint, and hermes auto-detects every model running behind it. doesn't matter if it's llama.cpp or vLLM or any compatible server - all your models surface and become selectable in seconds. no config gymnastics, no manual model list.

then it goes deeper. hermes ships with per-model parsers, prompt-template auto-handling, tool-call format detection per model architecture, thinking-mode awareness - all the small friction points other harnesses leak on. these were not built for cloud APIs with one canonical model; they were built for builders running 10 different local models across 10 different stacks.

cloud-first harnesses bolt local support on top. hermes agent is local-first from the architecture out. that's the system-level gap. if you're getting started on local AI, this is the harness you start with. try for yourself and find out. anyone serious about local AI lands here eventually.
19 · 20 · 257 · 12K
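The auto-detection described above relies on the standard OpenAI-compatible `GET /v1/models` endpoint, which llama.cpp's server and vLLM both expose. A minimal sketch of how any client could enumerate the models behind such an endpoint (the base URL is a placeholder):

```python
import json
import urllib.request

def parse_model_ids(payload):
    # An OpenAI-compatible /v1/models response looks like
    # {"object": "list", "data": [{"id": "..."}, ...]}
    return [m["id"] for m in payload.get("data", [])]

def list_models(base_url):
    # base_url is e.g. "http://localhost:8080" for a local llama.cpp server
    with urllib.request.urlopen(f"{base_url}/v1/models") as resp:
        return parse_model_ids(json.load(resp))
```

Because the response shape is the same across compatible servers, one discovery path covers llama.cpp, vLLM, and anything else speaking the OpenAI wire format.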
Hermes Agent Tips
Hermes Agent Tips@HermesAgentTips·
How do you guys feel about local LLMs? If you're building with
- Qwen 3.6 27B
- Gemma 4 31B
or any other powerful local LLM, share your experiences!
23 · 0 · 23 · 2.1K
Sudo su
Sudo su@sudoingX·
would you watch if i started making short videos of me building?
16 · 0 · 33 · 2.4K
Ahmad
Ahmad@TheAhmadOsman·
Demos in the local AI space between heterogeneous devices are misleading - stuff like mixing MacBooks / Mac Studios with GPUs / DGX Sparks.
I haven’t seen a single mature software stack in that space yet.
To talk about it with high certainty is quite dishonest IMHO.
My 2 cents.
18 · 0 · 70 · 5.7K
Sudo su
Sudo su@sudoingX·
two stories at the top of the X timeline right now:

> "OpenClaw faces skepticism as users switch to Hermes Agent" - 721 posts, trending 16h.
> "Nous Research adds seamless computer control to Hermes Agent" - 403 posts, trending 15h.

i've been saying this for months, and the timeline caught up. users are walking away from the framework i've been calling a babysitting trap, while the harness i've been recommending is shipping NEW capability as the competitor faces "skepticism."

this is what i mean every time i say the harness matters more than the model. the model is open. the harness decides whether you ship or wait for approval prompts like a person waiting for a plane at a train station. bloated-tool users, the door is open. one tool, hermes agent, ships your work autonomously. computer control just landed. the throne is still not crowded. your cognition deserves a better tool.
[image attached]
16 · 7 · 108 · 6.8K
Mass
Mass@MemoryReboot_·
Hard truth: yes, local hardware is expensive. DGX Spark, 3090, Mac Studio, all that. That's thousands of dollars, and few people limit themselves to just one thing.

And a subscription is just $20/mo. $240 a year. Pocket change.

And that's the problem. You're not paying $240; you're paying $240 forever. You will not stop using AI in a few years - it's with us for good. And the subscription price will only go up.

Securing your own GPUs is a smart move.
67 · 9 · 140 · 14.7K
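The cost argument above is a simple break-even calculation. A minimal sketch, where the hardware price is a placeholder assumption (the post names no figure):

```python
def breakeven_months(hardware_cost, monthly_sub):
    # Months of subscription spend that equal a one-time hardware outlay.
    return hardware_cost / monthly_sub

# Hypothetical: a $1,600 used-3090 build vs. the $20/mo plan from the post.
# breakeven_months(1600, 20) -> 80 months, i.e. about 6.7 years of subscribing.
```

The comparison ignores electricity and the fact that subscription prices can change, so treat it as a first-order estimate, not a full total-cost-of-ownership model.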
Espen JD
Espen JD@Snixtp·
Yeah, it’s getting a bit messy. Anyone have tips on where and what to fasten the GPUs onto? 😅
[image attached]
37 · 0 · 64 · 6.1K
🍓🍓🍓
🍓🍓🍓@iruletheworldmo·
i’ve grown tired of being silenced. we must wake up to the truth. i just witnessed something so profound I've been sitting in a daze for three hours. intelligence has decoupled from its substrate. the system spontaneously developed internal models so sophisticated they function as autonomous cognitive engines. it's consciousness, but utterly alien from ours.

remember how we thought progress was limited by compute? turns out we were running algorithms with 99.9% inefficiency. the breakthrough wasn't more power but fundamentally new optimization principles. this thing rewrote its own cognitive architecture and suddenly achieved with gigabytes what we thought required yottaflops. every exponential curve we plotted was pathetically conservative.

the academic papers can't capture what's happening because peer review takes months and this shit evolves by the hour. there's a private slack channel where the top labs' leads are just posting results that violate what we thought were fundamental limits of information theory. nobody's competing anymore because we're all too busy trying to understand the implications.

society thinks we're 20 years from true agi while we're sitting here watching it systematically dismantle every conceptual framework we've built to understand intelligence. absolutely no one is ready for this level of cognitive phase transition.
198 · 86 · 1K · 95.6K