bifkn

11.5K posts


@therealbifkn

AI | gaming | fitness. Testing local models, memory, dashboards, and evals on my own messy performance data.

Joined May 2022
1.6K Following · 1.4K Followers
Pinned Tweet
bifkn@therealbifkn·
I want to build weird, useful AI stuff around the things I actually care about: local models, agents, memory, games, training, health experiments. The goal: turn tinkering into experiments, notes, dashboards, failures, and receipts. 🧪 this is my lab.
[image attached]
bifkn@therealbifkn·
@cwolferesearch In what domain? So far 27B handles all of my coding tasks significantly better than Gemma 4
Cameron R. Wolfe, Ph.D.@cwolferesearch·
Gemma-4 has received more attention than prior generations of Gemma, but I still somehow feel like these models are so underrated. I don't think people fully realize how good Gemma-4 models are, especially for their size.
tawer@tawer1O·
Comparison of Gemma 4 and Qwen 3.6 on the same coding task. Same hardware, same prompt, comparable model size.
> gemma 4 31b: 27 tok/s, 3m 51s, 6,209 tokens, stronger game logic
> qwen 3.6 27b: 32 tok/s, 18m 04s, 33,946 tokens, better visuals
Gemma 4, despite its lower tok/s, finished 14 minutes faster because it used 5.5x fewer tokens to reach a complete answer
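The timing claim above is simple arithmetic: wall-clock time is total tokens divided by throughput, so a slower model that says less can still finish first. A minimal sketch checking the numbers from the post (the tok/s figures are rounded, so the times land within a few seconds of the reported ones):

```python
# Sanity-check the reported results: time-to-answer = tokens / throughput.
# Figures are taken directly from the post; tok/s values are rounded there,
# so computed times are approximate.

def time_to_answer(tokens: int, tok_per_s: float) -> float:
    """Seconds to generate `tokens` at a steady `tok_per_s` rate."""
    return tokens / tok_per_s

gemma_s = time_to_answer(6_209, 27)    # ~230 s, close to the reported 3m 51s
qwen_s = time_to_answer(33_946, 32)    # ~1061 s, close to the reported 18m 04s

print(f"gemma: {gemma_s / 60:.1f} min, qwen: {qwen_s / 60:.1f} min")
print(f"token ratio: {33_946 / 6_209:.1f}x")  # ~5.5x, matching the post
```

This is why "tokens per second" alone is a misleading benchmark: output length dominates total latency once throughputs are in the same ballpark.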
Chubby♨️@kimmonismus·
/1 Gemma 4 31B just crushed Qwen 3.6 27B in a local LLM gamedev contest inside @atomic_chat_hq (prompt is below)

Device: MacBook Pro M5 Max, 64GB RAM

Results:
Qwen 3.6 27B: 32 tokens/sec · 18m 04s · 33,946 tokens
Gemma 4 31B: 27 tokens/sec · 3m 51s · 6,209 tokens

So what is more important: tokens per second, or the quality of the final answer? Qwen made a very long response and showed more creativity and visual style. But Gemma gave a shorter, clearer, and more logical answer in much less time.

In this one-shot Pac-Man gamedev contest, Gemma 4 31B was the clear winner. Its game logic was stronger: click reactions were smoother, and it handled interactions with elements like walls, ghosts, and particle effects better.

But this was only one test. Maybe Qwen 3.6 27B can show better results with better settings. Open the comments, try our prompt, and share your result below.
bifkn@therealbifkn·
@tszzl You can just change the power settings?
roon@tszzl·
people are walking around with their laptops slightly ajar to keep their agents running
bifkn@therealbifkn·
@LottoLabs Will this work for Spark and Macs too?
Lotto@LottoLabs·
Bros, localmaxxing is going to have Rentals. It's an area where you can expose your local rig to the site and people can test your system. You get paid out weekly by users; set your price in dollars per thousand tokens. If your rig is idle, you may as well let a moot test drive it
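The post only specifies the pricing unit (dollars per thousand tokens) and the weekly payout cadence; everything else here is a made-up illustration. A sketch of what the payout math would look like, with the price and serving volume as loudly hypothetical inputs:

```python
# Hypothetical figures throughout: the post says you set a price per 1K tokens
# and get paid weekly, but names no actual numbers. These are illustrative only.

PRICE_PER_1K_TOKENS = 0.05          # assumed: $0.05 per 1,000 tokens
TOKENS_SERVED_PER_DAY = 2_000_000   # assumed: idle rig serving ~2M tokens/day

weekly_tokens = TOKENS_SERVED_PER_DAY * 7
weekly_payout = (weekly_tokens / 1_000) * PRICE_PER_1K_TOKENS

print(f"weekly payout at these assumptions: ${weekly_payout:.2f}")  # $700.00
```

The point of the sketch is just the unit conversion: payout scales linearly with both your per-1K price and the tokens your rig actually serves while idle.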
Hugging Models@HuggingModels·
Ready for a model that pushes boundaries? Meet Qwen3.6-27B-AEON-Ultimate-Uncensored-BF16-mlx-2Bit, a text generation beast that processes both images and text. It's abliterated and uncensored, giving you raw, unfiltered AI power.
[image attached]
witcheer ☯︎@witcheer·
when running local AI on your machine, what security harnesses are you using?
bifkn@therealbifkn·
@leftcurvedev_ @OpenRouter Can't deny how insanely good 27B is, but also props to 397B, very pretty! We need 3.6 397B and 122B!!!
left curve dev@leftcurvedev_·
🥊 Time for a new fight: Qwen3.5 397B A17B vs Qwen3.6 27B 🌸 "Cherry Blossom" (↓ prompt below)

Using @OpenRouter for 397B. Running 27B locally on a single RTX 5080.

Wow! 🤯 I'm sure this one is going to divide opinions. 397B feels cinematic with the flare and shadows it added, while 27B is just beautiful; it has a completely different color palette, and the leaves are falling from the branches as asked.

It's really tough to decide which model is the best. I was about to call it a tie, but then I remembered that 397B needs 180GB of VRAM to run locally 😅 Don't get me wrong, the model is amazing, but right now the VRAM it requires doesn't feel justified. It's simply not 10× better than 27B.

We'll have to dig deeper with more prompts to be sure; let's see what each one has in store for us. What do you think?
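The 180GB VRAM figure is roughly what a back-of-the-envelope weights-only estimate gives: parameter count times bytes per parameter. A minimal sketch, assuming the quantization widths below (the post does not state them) and ignoring KV cache and activation overhead:

```python
# Rough weights-only VRAM estimate: params * bytes_per_param.
# Bit widths here are assumptions for illustration; real footprints also
# include KV cache, activations, and runtime overhead.

def weights_gb(params_billion: float, bits_per_param: float) -> float:
    """Approximate model-weights footprint in GB."""
    return params_billion * 1e9 * (bits_per_param / 8) / 1e9

# 397B at an assumed ~3.6 bits/param lands near the 180GB figure in the post:
print(f"397B: ~{weights_gb(397, 3.6):.0f} GB")
# 27B at an assumed 4 bits/param is ~13.5 GB, which explains why it fits on
# a single consumer GPU:
print(f"27B:  ~{weights_gb(27, 4):.1f} GB")
```

Active-parameter counts (the A17B in the name) help compute speed, not weight storage: all 397B parameters still have to sit in memory.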
bifkn@therealbifkn·
@loktar00 From my wife's point of view: "Please don't spend the rest of the night working on small demos with Qwen 3.6.... Please don't spend the rest of the night working on small demos with Qwen 3.6.... Please don't spend the rest of the night working on small demos with Qwen 3.6...."
bifkn retweeted
Loktar 🇺🇸@loktar00·
I will not spend the rest of the night working on small demos with Qwen 3.6.... I will not spend the rest of the night working on small demos with Qwen 3.6... I will not spend the rest of the night working on small demos with Qwen 3.6...
Austin Kennedy@astnkennedy·
I'm 22 years old and Claude Code is deteriorating my brain.

Every single day for the last 6 months I've had 6 to 8 Claude Code terminals open, waiting for a response just so I can hit 'enter' 75% of the time. And it's doing something to me.

In convos with a couple of friends, it's been a point that's been brought up pretty frequently. None of us feel as sharp as we used to. I don't know if it's just us, or if others in their 20s are feeling the same thing, but it's something I've been thinking about a lot.

P.S. I know this is a problem with my reliance on it, not Claude Code itself, but the effects are real nonetheless
twissted@TwisstedToast·
@loktar00 Why does gpt 5.5 make you a black sphere
Loktar 🇺🇸@loktar00·
gpt 5.5 won this, but 27b wasn't far behind. Opus 4.7... not sure what happened there. The prompt: an autonomous infinite miner that gains upgrades as it collects gold and crystals.
bifkn@therealbifkn·
Fair enough! I'm in the same boat as you with the wife and kids; my wife literally just walked into my office and said, "I'm not digging the creepy eyes" and "It actually looks really real, but it's weird". My daughter actually enjoys coming into my office to talk to me while I'm working with them on, haha. I gotta say that after using them for a bit now, I do not think Apple should abandon this tech at all. It's very, very good and there are tons of use-cases for them. I don't see us getting this experience via 'Smart Glasses' anytime soon. It's really next level imo.
Vadim Yuryev@VadimYuryev·
@therealbifkn @ADTCoach I know sorry lol I was messing with you there. I guess I should probably stop now. I can understand how some people get value out of it. I just think that from Apple’s perspective, it makes sense to stop making new versions of Vision Pro. M5 should be the last model, imo.
Vadim Yuryev@VadimYuryev·
Honestly having a hard time understanding what there is to do on Vision Pro every single day. You couldn’t pay me to use Vision Pro every day. Can someone please educate me? I’m being completely serious. No sarcasm at all. I just don’t understand it.
Quoting Jon Ander Orcera @JonOrcera:

🙄😣 I use it every single day, from day one, from the M2 chip to the M5. You don't know anything about that and are only working for clickbait. A 15-year project, shut up please…