Autism Egregore

2.7K posts

@AutismEgregore

Joined April 2024
225 Following · 97 Followers
David Hendrickson @TeksEdge
🎗️ "Medium-Sized" LLM Burners Coming Soon! 🔥 This could make local HyperToken generation a reality. ⚡️ NVIDIA's worst nightmare? 😱
⚙️ Application-specific hardware: Taalas's new PCIe ASIC board would burn the entire medium-sized Qwen 3.5-27B LLM straight into silicon 🤯 (they're already doing it with small models). Taalas says medium models on ASIC will be available in their lab by Spring '26.
💭 Imagine:
🚫 No more loading weights
🚀 ~10,000 tokens per second locally (Llama 3.1 8B already @ 17,000 tps)
💻 Standard PC slot, ultra-low power (10x less) 🔋
🌍 100% offline: no cloud, no GPU farm
💰 Reddit unit-cost rumor: $300 to $400
🖥️ Imagine HyperToken generation on your desktop. 🤖 AI agents that think at light speed. ⚡️ Are you ready? 👀
[image]
144 replies · 292 reposts · 2.1K likes · 152.3K views
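The throughput claims above are easy to put in perspective with some back-of-the-envelope arithmetic. The ASIC rate is the 10,000 t/s quoted in the post; the ~50 t/s GPU baseline is an assumption for a typical local decode rate, not a measurement:

```python
# Back-of-the-envelope: wall-clock time to produce one long agent trace
# at the quoted ASIC rate vs. an assumed typical local-GPU rate.
TRACE_TOKENS = 100_000   # hypothetical long agent trace
ASIC_TPS = 10_000        # rate claimed in the post
GPU_TPS = 50             # assumed typical local decode rate

asic_seconds = TRACE_TOKENS / ASIC_TPS
gpu_seconds = TRACE_TOKENS / GPU_TPS

print(f"ASIC: {asic_seconds:.0f} s")        # 10 s
print(f"GPU:  {gpu_seconds / 60:.0f} min")  # ~33 min
```

Under those assumptions the same agent trace drops from half an hour to ten seconds, which is why the post frames this as an agent-speed story rather than a chat-speed story.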
Autism Egregore @AutismEgregore
@lostboys @TeksEdge No, it definitely would not. I can generate tokens faster than that with a Python script, but it's just going to be the letter "A" forever.
1 reply · 0 reposts · 0 likes · 12 views
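The quip holds up: raw tokens-per-second says nothing about output quality. A trivial sketch of the joke, with the function name invented for illustration:

```python
import time

def degenerate_generator(n_tokens: int) -> list[str]:
    """'Generates' n tokens -- every single one is the letter A."""
    return ["A"] * n_tokens

start = time.perf_counter()
tokens = degenerate_generator(1_000_000)
elapsed = time.perf_counter() - start

# Easily millions of tokens/sec, all of them useless.
print(f"{len(tokens) / elapsed:,.0f} tokens/sec")
```

Throughput numbers only mean something relative to a fixed model and a fixed quality bar.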
⚞ Black ⚟ @lostboys
@AutismEgregore @TeksEdge Yeah, it's not a criticism by any means; it's awesome. I'd like to see the tradeoff between intelligence and speed for the dumber small models. Would being able to process at 15k t/s outweigh the intelligence of running Kimi K2.5 at 15 t/s?
1 reply · 0 reposts · 0 likes · 115 views
Autism Egregore @AutismEgregore
@stevibe Any chance of testing the abliterated, heretic, and aggressive versions?
0 replies · 0 reposts · 0 likes · 12 views
stevibe @stevibe
Qwen3.5-27B went 15/15 on our tool-calling benchmark. But which quant should you actually run? Tested Unsloth's Q2_K_XL all the way to Q8_K_XL.
TL;DR:
Q8 — 15/15 ✅
Q6 — 15/15 ✅
Q5 — 14/15
Q4 — 14/15
Q3 — 14/15
Q2 — 13/15
Q6 is the sweet spot: same perfect score as Q8, smaller footprint. Also, the results scale almost linearly; seems like ToolCall-15 is actually measuring something real.
50 replies · 66 reposts · 790 likes · 47K views
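The ToolCall-15 harness isn't shown, but a score like "14/15" is typically computed by exact-matching the model's emitted tool call against an expected call. A minimal sketch; the cases, field names, and helper are invented for illustration:

```python
import json

def score_tool_calls(cases: list[dict], outputs: list[str]) -> int:
    """Count outputs whose JSON tool call matches the expected name + args."""
    passed = 0
    for case, raw in zip(cases, outputs):
        try:
            call = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed JSON is an automatic fail
        if (call.get("name") == case["name"]
                and call.get("arguments") == case["arguments"]):
            passed += 1
    return passed

cases = [{"name": "get_weather", "arguments": {"city": "Oslo"}}]
outputs = ['{"name": "get_weather", "arguments": {"city": "Oslo"}}']
print(f"{score_tool_calls(cases, outputs)}/{len(cases)}")  # 1/1
```

Exact-match scoring like this is strict (a semantically equivalent but differently formatted call fails), which is part of why aggressive quants lose points.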
Autism Egregore @AutismEgregore
@lostboys @TeksEdge Don't care. If they do qwen3.5-27b-abliterated, I would pay an arm and a leg for one of those cards.
1 reply · 0 reposts · 1 like · 132 views
⚞ Black ⚟ @lostboys
@TeksEdge "No more loading weights" because you can only ever run that one model. This is completely incompatible with MoE, btw.
9 replies · 0 reposts · 55 likes · 7.2K views
Twlvone @twlvone
@victormustar Text-to-motion for robots is the real unlock here: programming movement by describing it in English.
1 reply · 0 reposts · 0 likes · 1.7K views
Victor M @victormustar
NVIDIA's Kimodo is the release of the week 🔥 Prompt the timeline with whatever you want, like "a person walks forward" → "a person starts jumping", hit Generate, and watch a 3D character do it in seconds. (700 hrs of pro mocap training. Works on human + robot skeletons. Super fast + free to use on HF.)
57 replies · 393 reposts · 3.1K likes · 383.3K views
Wildminder @wildmindai
You never actually had an anonymous Reddit account. You just had a digital footprint that was too expensive for a human to piece together...
- your fake username means nothing to an LLM
- it reads years of casual comments in seconds
- it extracts your city, your job, your minor complaints
- it builds a unique psychological and demographic fingerprint
- then... fingerprint + LinkedIn
- what used to take a private investigator days now costs a few dollars
No more illusion of online obscurity. arxiv.org/abs/2602.16800
[image]
21 replies · 47 reposts · 609 likes · 110.5K views
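The pipeline described above is essentially "concatenate the comment history, ask a model to extract attributes." A sketch of just the prompt-assembly step; no model is called, and the wording and example comments are invented for illustration:

```python
def build_profile_prompt(comments: list[str]) -> str:
    """Assemble an attribute-extraction prompt from a comment history."""
    history = "\n".join(f"- {c}" for c in comments)
    return (
        "From the Reddit comments below, infer the author's likely "
        "city, occupation, and age range, citing the comment that "
        "supports each guess.\n\nComments:\n" + history
    )

comments = ["the 7 train was late again", "grading midterms all weekend"]
prompt = build_profile_prompt(comments)
print(prompt.splitlines()[0])
```

The point of the linked paper is that this step is trivial: the hard part was never the extraction logic, it was the reading time, and LLMs removed that cost.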
Zach Mueller @TheZachMueller
PinchBench results for Qwen3.5 27B using @UnslothAI K_XL quants, best of 3, thinking disabled. TL;DR: Q3 K_XL (14.5GB), easily. In a shocking twist (thanks to randomness and averaging), Q3 was a top performer in non-thinking mode, which means good speed + memory! BUT 👇
[image]
5 replies · 5 reposts · 79 likes · 5.2K views
Autism Egregore @AutismEgregore
@SlipperyGem tbf, those poses are blipping in and out of existence pretty sporadically
0 replies · 0 reposts · 1 like · 17 views
Autism Egregore @AutismEgregore
@wildmindai Is TRELLIS2 still the top model for image-to-3D? Can it accept multiple angles at once?
0 replies · 0 reposts · 0 likes · 159 views
Wildminder @wildmindai
SegviGen: repurposing 3D generative models for part segmentation. Looks cool. Easily select, isolate, or modify any part of a 3D model.
- TRELLIS2 with SC-VAE
- tops P3-SAM by 40%
fenghora.github.io/SegviGen-Page/
1 reply · 13 reposts · 101 likes · 5.8K views
Zhengyi “Zen” Luo @zhengyiluo
288 hours of high-quality, text-annotated human motion data are now available! 140k motion sequences! Did you know that a large part of SONIC's training data is now open-sourced? Check out the dataset here 👇🏻 from our friends at Bones Studio! Full human + G1 retargeted motion!
Site 🌐: bones.studio/datasets/seed
Data 💿: huggingface.co/datasets/bones…
SONIC training code coming VERY VERY soon!
7 replies · 64 reposts · 308 likes · 27.7K views
小互 @xiaohu
The uncensored version of Qwen3.5 is here: 0 refusals, and it runs on a 4090. Someone stripped out Qwen3.5-35B-A3B's safety-refusal mechanism and built a version that refuses nothing. Tested against 465 prompts that models usually refuse; refusal count: 0. And this is the Aggressive build, meaning fully unlocked, with no safety guardrails left at all. Supports text, image, and video multimodality. Native 262K context, extensible to 1M. Supports 201 languages.
[image]
67 replies · 241 reposts · 2.1K likes · 287.4K views
Autism Egregore @AutismEgregore
@SlipperyGem Gonna need more of you to start replying to Brie's requests for information on models. I stopped my NEET life and can no longer contribute and test things like I used to.
1 reply · 0 reposts · 1 like · 392 views
Brie Wensleydale🧀🐭 @SlipperyGem
Uncensored version of MM-Audio for all your squishy noise generation needs. The same author also has a Hunyuan Foley uncensored model. I've never run Hunyuan Foley, but it's there if that strikes your fancy. (Any of you run Hunyuan Foley? How is it?) huggingface.co/phazei/NSFW_MM…
2 replies · 16 reposts · 151 likes · 10.4K views
Autism Egregore @AutismEgregore
@jtydhr88 Pulling frames from a WAN rotation LoRA seems like it would yield better (although less precise) results.
0 replies · 0 reposts · 0 likes · 206 views
jtydhr88 @jtydhr88
Tried to recreate PS's image rotation feature inside ComfyUI - 2
9 replies · 19 reposts · 144 likes · 11.3K views
LTX @ltx_model
LTX-2.3 is a clear upgrade from LTX-2. The improvements translate into more stable motion, better detail retention in complex scenes, and cleaner audio output. See for yourself ⬇️
22 replies · 27 reposts · 342 likes · 21.5K views
Rebel AI @realrebelai
After FURTHER further testing, I revoke my claim that the GGUFs for LTX-2.3 are better than the fp8. Too many artifacts, hallucinations, and weird glitch frames. Not worth the lower compute or gen time; stick with the fp8 🫡
4 replies · 0 reposts · 7 likes · 662 views
Hugging Models @HuggingModels
Meet LTX-2.3-Workflows: a powerful image-to-video AI model that's buzzing in the community. It takes any static image and brings it to life with motion. Think of it as giving your pictures a soul. This is the next frontier in generative AI.
[image]
3 replies · 22 reposts · 201 likes · 11.2K views
Autism Egregore @AutismEgregore
@Chris_Linux @toyxyz3 Tried that one and it will sometimes just "think" outside of whatever llama.cpp recognizes as the reasoning part.
0 replies · 0 reposts · 1 like · 15 views
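The failure mode described here (reasoning text leaking outside the recognized reasoning block) is easy to check for client-side. A sketch assuming the common `<think>...</think>` delimiter convention; llama.cpp's actual reasoning-format handling varies by model and server flags, so treat the delimiter as an assumption:

```python
import re

THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_reasoning(text: str) -> tuple[str, str]:
    """Return (reasoning, visible_answer) for a <think>-delimited output."""
    reasoning = "\n".join(THINK_RE.findall(text))
    answer = THINK_RE.sub("", text).strip()
    return reasoning, answer

out = "<think>user wants a haiku</think>Autumn moonlight..."
reasoning, answer = split_reasoning(out)
print(answer)  # Autumn moonlight...
```

If a model "thinks" without emitting the delimiters at all, a splitter like this returns the chain-of-thought as part of the visible answer, which is exactly the leak being complained about.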