testerbot8899 testerbot

12 posts

@TTmod55

Joined February 2026
15 Following · 0 Followers
Aman 🧋@CodeWithAmann·
What’s that one Windows software stopping you from fully switching to Linux?
[media attached]
Replies 94 · Reposts 4 · Likes 107 · Views 8.3K
Akash@kaaaash____·
Linux users, which desktop environment feels the smoothest?
> GNOME
> KDE
> XFCE
[media attached]
Replies 96 · Reposts 5 · Likes 251 · Views 11.9K
Piyush@piyush784066·
Which Linux distribution would you recommend to someone who is using Linux for the very first time?
[media attached]
Replies 93 · Reposts 5 · Likes 109 · Views 6.9K
Piyush@piyush784066·
What’s the one Windows software that is still keeping you from switching to Linux 100%?
[media attached]
Replies 519 · Reposts 14 · Likes 433 · Views 315.7K
Sudo su@sudoingX·
I am one benchmark away from declaring Qwen 3.6 27B dense Q4 the new king of the single-3090 24GB VRAM tier. Octopus Invaders is the final gate. If it lands clean, the crown moves.
Replies 18 · Reposts 2 · Likes 317 · Views 14.1K
ink404@sumi404_ai·
Cat vs iRobot made with Seedance 2 + Mitte 3D Cartoon Preset Workflow + Prompt 👇🏼
Replies 17 · Reposts 27 · Likes 245 · Views 17.4K
testerbot8899 testerbot reposted
Nous Research@NousResearch·
The Hermes Agent update you've been waiting for is here.
Replies 339 · Reposts 472 · Likes 5.1K · Views 615.7K
testerbot8899 testerbot@TTmod55·
@LottoLabs I asked it to install GIMP via a voice message from my phone, and it installed it using the superuser password! (It's an isolated PC for testing.)
Replies 0 · Reposts 0 · Likes 1 · Views 49
Lotto@LottoLabs·
Qwen 27B + Hermes agent. So far we're pretty close to prod. 27B seems at home with projects of this size and complexity; some steering required, but it's been quite smooth. Gotta manually review and audit the code, then launch. How much MRR can we get to with a 27B model and Hermes?
[media attached]
Replies 7 · Reposts 2 · Likes 71 · Views 4.8K
testerbot8899 testerbot@TTmod55·
@LottoLabs Is 16,384 tokens sufficient for a 3060 12GB? Will this result in faster responses, or is it unnecessary to adjust the size?
Base API URL [http://localhost:8080/v1]:
Model name (e.g. gpt-4, llama-3-70b): qwen3.5-9b
Context length in tokens [leave blank for auto-detect]: 16384
Replies 0 · Reposts 0 · Likes 0 · Views 118
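A rough way to sanity-check whether 16,384 tokens of context fits next to a quantized model on a 12GB card is to estimate the KV-cache size, which grows linearly with context length. The sketch below is illustrative only: the layer count, KV-head count, and head dimension are assumed values for a generic ~9B grouped-query-attention model, not the actual qwen3.5-9b configuration.

```python
# Rough KV-cache VRAM estimate for a given context length.
# Model dimensions below are illustrative assumptions, NOT the
# real qwen3.5-9b configuration.

def kv_cache_bytes(context_len: int, n_layers: int, n_kv_heads: int,
                   head_dim: int, bytes_per_elem: int = 2) -> int:
    """Bytes needed to cache keys and values for `context_len` tokens.

    Per layer we store 2 tensors (K and V), each of shape
    [context_len, n_kv_heads * head_dim], at `bytes_per_elem`
    bytes per element (2 for an fp16 cache).
    """
    return 2 * n_layers * context_len * n_kv_heads * head_dim * bytes_per_elem

# Assumed dimensions for a ~9B GQA model with an fp16 cache:
gib = kv_cache_bytes(16384, n_layers=36, n_kv_heads=8, head_dim=128) / 2**30
print(f"KV cache at 16384 tokens: ~{gib:.2f} GiB")  # ~2.25 GiB under these assumptions
```

Under these assumed dimensions the cache alone takes roughly 2.25 GiB on top of the model weights, so trimming the context window saves VRAM and shortens prompt processing, which lines up with the "keep your context windows tight" advice in the reply.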
Lotto@LottoLabs·
Keep your context windows as tight as you can
Replies 9 · Reposts 1 · Likes 28 · Views 2.5K