Will Finger

886 posts


@willfi

Product Designer, Web3 On-chain, AI engineer. Married. Father. Learning love with Jesus.

Portugal · Joined June 2009
429 Following · 168 Followers
Will Finger
Will Finger@willfi·
@stevibe 27B is the best of them. But how does it compare to Opus 4.6? I wonder.
stevibe
stevibe@stevibe·
"122B has to be smarter than 27B."

I showed 4 UI components to three Qwen3.5 models and asked them to recreate them from a screenshot alone:
- 27B (dense)
- 35B-A3B (MoE)
- 122B-A10B (MoE)

Same screenshot. Same prompt. Same task. Which one do you think nailed it?
Will Finger reposted
0xSero
0xSero@0xSero·
In 72 hours I got over $100k of value:
1. Lambda gave me $5,000 in compute credits
2. Nvidia offered me 8x H100s on the cloud ($20/h); idk for how long, but assuming 2 weeks that'd be ~$5,000
3. TNG Technology offered me 2 weeks of B200s, which is something like $12,000 in compute
4. A kind person offered me $100k in GCP credits (enough to train a 27B if you do it right)
5. Framework offered to mail me a desktop computer
6. We got $14,000 in donations, which will go to buying 2x RTX Pro 6000s (bringing me up to 384GB VRAM)
7. I got over 6M impressions, which based on my RPM would be $1,500 over my usual ~$500 per pay period
8. I have gained ~17,000 followers, more than doubling my follower count
9. 17 subscribers on X + 700 on YouTube

The total value of all this approaches at minimum ~$50,000, and closer to $150,000 if I leverage it all.

What I'll be doing with all this:

Eric is an incredibly driven researcher I have been bouncing ideas off over the last month. He and I have been tackling the idea of getting massive models to fit on relatively cheap memory. The idea is to take advantage of different forms of memory, in combination with expert saliency scoring, to offload specific expert groupings to different memory tiers. For the MoEs I've tested over my entire AI session history, about 37.5% of the model is responsible for 95% of token routing. So we can offload 62.5% of an LLM onto SSD/NVMe/CPU/cheap VRAM; this should theoretically add minimal latency if we can select the right experts. We can combine this with paged swapping to further accelerate prompt processing. Done right, we are looking at very decent performance for massive unquantised and unpruned LLMs. You can get DeepSeek-v3.2-speciale at full intelligence with decent tokens/s as long as you have enough VRAM to host the core 20-40% of the model and enough RAM or SSD to host the rest.

Add quantisation to the mix and you can basically have decent speeds and intelligence with just 5-10% of the model's size in VRAM (plus you need some for context). The funds will be used to push this to its limits.

There's also tons of research showing you can quantise a model drastically, then distill from the original BF16 or make a LoRA to mostly align it back to the original. This will be added to the pipeline too.

All this will be built out here: github.com/0xSero/moe-com… You will be able to take any MoE and shove it in here with only 24GB and enough RAM/NVMe to compress it down. It'll be slow as hell, but it will work with little tinkering.

Lastly, I will be looking into either a full training run from scratch, or just post-training on an open AMERICAN base model:
- a research model
- an openclaw/nanoclaw/hermes model
- a browser-use model
To prove that this can be done. I will be bad at all of it, and doubt I will get beyond the best small models from 6 months ago, but I want to prove to everyone who says otherwise that it's no boogeyman impossible task.

By the end of the year:
1. I will have 1 model I trained in some capacity be in the top 5 at either pinchbench, browseruse, or research.
2. My GitHub will have a master repo which combines all my work into reusable, generalised scripts to help you do the same.
3. The largest public comparative dataset for all MoE quantisations, prunes, benchmarks, costs, and hardware requirements.

A lot of this will be led by Eric, who I will tag in the next post. I want to say thank you to everyone who has supported me. I have gotten a lot of comments stating:
1. I'm crazy, stupid, or both
2. I'm wasting my time, no one cares about this
3. This is not a real issue
I believe the amount of interest and support I've received says it all. donate.sybilsolutions.ai
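The "37.5% of experts carries 95% of routing" observation implies a simple tiering rule: rank experts by observed routing frequency and keep the smallest prefix that covers the target share of tokens in fast VRAM. A minimal sketch of that selection step, assuming you already have a per-expert routing histogram (the function name and the toy counts below are mine, not from 0xSero's repo):

```python
def hot_expert_split(routing_counts, coverage=0.95):
    """Split experts into a 'hot' tier (keep in VRAM) that covers
    `coverage` of observed token routing, and a 'cold' tier that
    can be offloaded to CPU RAM / NVMe."""
    total = sum(routing_counts.values())
    hot, covered = [], 0
    # Greedily take the most-routed experts first.
    for expert, count in sorted(routing_counts.items(),
                                key=lambda kv: kv[1], reverse=True):
        if covered / total >= coverage:
            break
        hot.append(expert)
        covered += count
    cold = [e for e in routing_counts if e not in set(hot)]
    return hot, cold

# Toy routing histogram: a few experts dominate, as in the MoEs above.
counts = {"e0": 500, "e1": 300, "e2": 150, "e3": 30, "e4": 15, "e5": 5}
hot, cold = hot_expert_split(counts)
```

The real system would do this per layer and re-score periodically, since expert saliency drifts with the workload; this only shows the greedy cutoff.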
Will Finger
Will Finger@willfi·
@sudoingX RTX 5080 + AMD 9800X3D + 96GB DDR5 RAM. Running Qwen 35B at 100 T/s with 128k context.
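A setup like this is dominated by KV-cache memory at long context, which scales linearly with context length. A back-of-the-envelope estimator, using hypothetical dimensions for a 35B-class model (assumed for illustration, not official Qwen specs):

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem=2):
    """Memory for the K and V caches of a transformer at a given context.
    Factor of 2 is for K plus V; bytes_per_elem=2 assumes an FP16/BF16 cache."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

# Hypothetical dims for a ~35B-class GQA model (not official numbers):
gib = kv_cache_bytes(n_layers=48, n_kv_heads=8, head_dim=128,
                     ctx_len=131072) / 2**30  # ≈ 24 GiB at FP16
```

Under these assumptions a full 128k cache would not fit in a 16GB card's VRAM alone, which is where the 96GB of system RAM earns its keep; a quantised (8-bit) cache or fewer KV heads halves or better the figure.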
Sudo su
Sudo su@sudoingX·
i just became a mod of x/LocalLLaMA. if you're running local models on your own hardware and want in, the community is open. pinned and highlighted on my profile. approving members starting today. drop your setup below and i'll get you in. 3060, 3090, 4090, 5090, AMD, whatever you're running. all welcome. if you're hitting issues with hermes agent, llama.cpp, model selection, configs, i'm here. let's make local AI accessible for everyone.
Sudo su@sudoingX

let me get you started in local AI and bring you to the edge. if you have a GPU or are thinking about diving into the local LLM rabbit hole, the first thing you do before any setup is join x/LocalLLaMA. this is the community that will help you at every step. post your issue and we will direct you, debug with you, and save you hours of work.

once you're in, follow these three:

@TheAhmadOsman: the oracle. this is where you consume the latest edges in infrastructure and AI. if something dropped you hear it from him first. his content alone will keep you ahead of most.

@0xsero: one-man army when it comes to model compression, novel quantization research, and new tools and tricks that make your local setup better. you will learn, experiment, and discover things you didn't know existed.

@Teknium: maker of Hermes Agent, the agent i use every day from @NousResearch. from Teknium you don't just stay at the frontier, you get your hands on the tools before everyone else. this is where things are headed.

if you follow me, follow these three and join the community. you will be ahead of most people in this space. if you run into wrong configs, get stuck debugging hardware, or can't get a model to load, post there so we can help. get started with local AI now. not only understand the stack but own your cognition. don't pay openai fees on top of giving them your prompts, your research, and your most valuable thinking to be monitored and metered. buy a GPU and build your own token factory.

Will Finger reposted
Alex Barashkov
Alex Barashkov@alex_barashkov·
A new free tool for designers is on the way. Made by designers, for designers. Coming soon.
Will Finger
Will Finger@willfi·
@BrettFromDJ Love the depth! What do you think about clutter/noise? How do you mitigate it in a scalable DS?
Brett
Brett@BrettFromDJ·
More interface details. 😍
0xSero
0xSero@0xSero·
OpenCode Desktop app updates worth checking out.
1. Now we have Queue mode, which I am a huge fan of.
2. They've enabled adding custom providers in settings.
3. Performance seems improved; it's less sluggish in large sessions and threads.
ً
ً@Poloolpp·
If you're not running the P2K now you're trolling
Will Finger
Will Finger@willfi·
@Ozzny_CS2 Yep. The AWP needs 6 bullets instead of 5 to balance now.
Ozzny
Ozzny@Ozzny_CS2·
Here's how the new Reload system works in CS2 ‼️

> When you reload, you drop the used magazine and get a full new one
> Each reload takes 1 magazine from the counter

What do we think, W or L change?
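The rule described above is easy to state as state transitions: firing only decrements the loaded magazine, and any reload swaps the whole magazine, losing the leftover rounds and consuming one spare from the counter. A toy model (class and field names are mine, for illustration):

```python
class Weapon:
    """Sketch of the discard-on-reload rule: an early reload throws away
    the rest of the current magazine and takes one spare from the counter."""
    def __init__(self, mag_size, spare_mags):
        self.mag_size = mag_size
        self.loaded = mag_size        # rounds in the current magazine
        self.spare_mags = spare_mags  # full magazines remaining

    def fire(self, rounds=1):
        self.loaded = max(0, self.loaded - rounds)

    def reload(self):
        if self.spare_mags > 0:
            self.spare_mags -= 1          # take one magazine from the counter
            self.loaded = self.mag_size   # leftover rounds are lost

ak = Weapon(mag_size=30, spare_mags=3)
ak.fire(15)   # spray 15 into a smoke...
ak.reload()   # ...and throw the other 15 away with the magazine
```

This deliberately ignores the chambered "+1" round that replies bring up; it only models the discard-on-reload mechanic.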
Opiee Cs
Opiee Cs@EkeN3tt·
@Interloper_CS @gabefollower YAY, CZ meta instead of AK or M4. I don't think people understand how much this affects the game. I mean, if you spray 15 bullets into a smoke you will not want to reload.
Gabe Follower
Gabe Follower@gabefollower·
Counter-Strike developers have just released one of the biggest updates to the meta. Now, if you reload your weapon while there is still ammo in the magazine, the ammo disappears instead of being returned to your reserves. They have also increased (and decreased) the number of bullets for some weapons. The CZ-75, for example, now has 36 bullets instead of 24.
Thour
Thour@ThourCS2·
This is how the NEW Reloading function works.
- It now shows the magazine count
- Early reload discards the whole magazine
GG @CounterStrike
seVer
seVer@LCamm12·
@ThourCS2 @CounterStrike Horrible IMO? Essentially destroyed spamming smokes or any walls that can be penetrated
san
san@_san24k·
@ThourCS2 @CounterStrike 30 bullets after reloading with a fully chambered weapon doesn't make sense. Should be 31.
Will Finger
Will Finger@willfi·
I have to repost this
Victor M
Victor M@victormustar·
now available: DLSS-5 anything for free ⬇️ how do you know you hate it if you don't try it first? Sharing the Hugging Face demo.
Will Finger reposted
Julian Lehr
Julian Lehr@julianlehr·
A hill I'll die on: Current LLM chat interfaces are a regression from GUIs. Actions that used to be links, buttons, or keyboard shortcuts are now things I have to spell out in conversation. Why?
Will Finger reposted
Ali Grids
Ali Grids@AliGrids·
just in case… obviously