IRMC

123.1K posts

@IRMC16

I Rest My Case. No DM.

Joined December 2018
5.3K Following · 2.7K Followers
Pinned Tweet
IRMC @IRMC16 ·
@MarieWvL Horror has a face. And you must make a friend of horror. Horror and moral terror are your friends. If they are not, then they are enemies to be feared. They are truly to be feared. -Colonel Kurtz in Apocalypse Now- youtu.be/sodDYJRpQjY
IRMC @IRMC16 ·
@EsPyrite Splendid little group. No bigger than a €2 coin. With a rifle I'd be happy with that. All you have to do is zero your sights on it and the next group will be a bullseye.
EsLe 🥀 @EsPyrite ·
The darts are going well 🤘
IRMC reposted
🇯🇵砂川 泉🎌 @26ers_bp115 ·
Lisbon, Portugal 🇵🇹: A Muslim immigrant, in revenge for having his advances rejected, assaulted a Portuguese woman and left her unconscious. Self-centered, violent, misogynistic. Don't let Muslims into Japan!
IRMC reposted
SULLY @SULLY10X ·
A hunter finds a deer trapped in a fence, struggling to breathe. What he does next is incredible!
IRMC reposted
Frans Nijhof @frans_nijhof ·
Pension fund executive Ronald Wuijster (APG/ABP) lost nearly 97 billion euros (96.9) on the stock market in 2022. Employees' money. So in 2023 his annual salary was raised by 136,000 euros, from 808,221 euros to 944,525 euros. A bonus among friends, isn't it?
IRMC reposted
David Turver @7Kiwi ·
This is the man that helped create the Climate Change Act that has resulted in the UK closing its fertiliser plants. Now he's worried about a lack of fertiliser. These people should be in prison.
David Miliband @DMiliband

The window to avert a massive global hunger crisis is rapidly closing. Must-read from the @guardian on the food security timebomb that will go off if fertiliser cannot pass through the Strait of Hormuz: theguardian.com/world/2026/apr…

Draneil Mifa @DragonGroky ·
@IRMC16 @sudoingX I tried your config but it runs at about 20 tokens/sec. Is that because you have a 4090 and I have a 3090?
Sudo su @sudoingX ·
people keep asking me what model to run on a single 3090. it's not even close. Qwen 3.5 27B dense Q4_K_M. undisputed.
kumikumi (Ankkala) @ankkala

@sudoingX to be clear, which model / quantization did you run on the 3090?

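The "single 3090" claim comes down to simple arithmetic. Below is a back-of-envelope sketch of why a 27B dense model at Q4_K_M can fit in a 24 GB card; the 4.5 bits-per-weight figure is an approximation (the exact size depends on the per-tensor quant mix in the GGUF file), not a value from the thread.

```python
# Back-of-envelope VRAM check: can a 27B dense model at Q4_K_M
# fit on a 24 GB RTX 3090? Q4_K_M averages roughly 4.5 bits per
# weight (approximate; varies by GGUF per-tensor quant mix).

def quantized_weight_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight size in GB for a quantized model."""
    return n_params * bits_per_weight / 8 / 1e9

weights = quantized_weight_gb(27e9, 4.5)
print(f"~{weights:.1f} GB of weights, "
      f"leaving ~{24 - weights:.1f} GB for KV cache and activations")
```

At roughly 15 GB of weights, the remaining headroom is what the KV cache and activations have to share, which is why cache quantization and context length come up later in this thread.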
IRMC @IRMC16 ·
@DragonGroky @sudoingX I was waiting for Gemma-4-31B with TurboQuant cache to swap out Qwen3.5-27B. My expectation was: bigger is better. Apparently not for my use case. x.com/TeksEdge/statu… x.com/leftcurvedev_/…
left curve dev @leftcurvedev_

everyone is wondering the same thing: qwen3.5 27b or gemma 4 31b? new benchmarks from @ArtificialAnlys are out, let's dig into the numbers:
💻 coding index > gemma wins
surprisingly it was the best and scored 42 — it managed to handle more coding tasks successfully than qwen, very interesting! 👀
🤖 agentic index > qwen destroys gemma
when it comes to tool calls, multi-step reasoning, and autonomous task execution, there's no need to talk about it — qwen is the absolute winner in the category, scoring 55 (!) vs 41 for gemma 🤯
👑 the winner > qwen3.5 27b stays undefeated
gemma could have been an amazing contender but in agentic tasks it's just too far behind compared to what qwen has to offer — makes no sense to use it if your tasks are heavy and need reasoning
what are your thoughts?

IRMC @IRMC16 ·
@DragonGroky @sudoingX
  - "--cache-type-k"
  - "q4_0"
  - "--cache-type-v"
  - "q4_0"
  - "--host"
  - "0.0.0.0"
  - "--port"
  - "8000"
  - "--alias"
  - "Qwen3.5-27B"
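That YAML-style argument list maps onto a direct llama-server invocation roughly like this. The flags shown are real llama.cpp server options; the model filename, --ctx-size, and -ngl values are illustrative placeholders, not taken from the thread:

```shell
# Sketch of an equivalent llama-server command line; model path,
# --ctx-size, and -ngl are placeholders chosen for illustration.
llama-server \
  -m Qwen3.5-27B-Q4_K_M.gguf \
  --ctx-size 196608 \
  --cache-type-k q4_0 \
  --cache-type-v q4_0 \
  --host 0.0.0.0 \
  --port 8000 \
  --alias Qwen3.5-27B \
  -ngl 99
```

The two --cache-type flags quantize the KV cache to 4-bit blocks, trading a little accuracy for a much smaller cache footprint at long contexts.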
IRMC @IRMC16 ·
@annexum_huib I have news for you. They're not going to do it. It would cost the managers their jobs. Or maybe not, but even then you'll get resistance, because a smaller budget and fewer reporting staff mean less importance within the club. Especially when the job-grading structure, and thus salary, is tied to those criteria.
IRMC reposted
Huib Boissevain 🇪🇺 @annexum_huib ·
In the economics literature (Rahn curve), the optimal size of government lies around 15–25% of GDP. The Netherlands is now at roughly 45%. In theory that means we are in the zone where additional government no longer yields growth but actually slows it down, through a high tax burden and crowding out of private investment. So the debate should not only be about what the government spends money on, but also about how large it should be in the first place.
Sudo su @sudoingX ·
i am not able to recover from this one. 27B dense on a $900 RTX 3090 outperforming 120B MoE on a $70K production node with 2x H200 NVL at full precision. this is not easy to process. it changes the way we pick models for any task.

if you're an AI startup running 120B MoE inference for agent workflows and a 27B dense with all parameters active on a single consumer GPU does it better, your compute bill might be solving the wrong problem.

i am writing the full deep dive article to document everything here and share with you all so we can reproduce and verify. the reproduction test is coming first. same 3090, same 27B dense Q4, same prompt, same harness. if it holds twice it's not a fluke. it's architecture.

and based on what the VRAM poll is showing me right now, most of you are sitting on the exact hardware that already won this fight. article drops this week.
Sudo su @sudoingX

i am still in shock that Qwen 3.5 27B dense on a single RTX 3090, a $900 GPU, one-shotted a game challenge that 120B MoE at full precision on $70K+ production hardware could not. this is leading me to doubt whether it was a fluke. so i am going to reproduce it. i will test 27B dense Q4 on my single 3090 again, paired with Hermes Agent, and have it reproduce the results.

after that i will test the same dense 27B but unquantized, because if Q4 can one-shot something that 120B full precision cannot, then i wonder what dense 27B unquantized would do. dense models with all parameters active on every token might matter more than total parameter count for agent coding.

if this reproduces it changes how i think about what hardware you actually need. this is not letting me sleep well since yesterday. i will report back.

IRMC @IRMC16 ·
@_blackcat1402 @sudoingX Yes, on llama.cpp. Had to compile the latest version. But I left this model already. It's not stable in my environment. I returned to Qwen3.5-7B.
IRMC @IRMC16 ·
@GlobalIJournal The S-500 is a defensive anti-aircraft/anti-missile system. What's the madness?
Global Insight Journal @GlobalIJournal ·
🇮🇱 Israeli media: "It appears that Russia has supplied Iran with the S-500 system. This is madness."
IRMC reposted
Kevin W. @Brink_Thinker ·
What the hell do they teach Japanese kids? It's going viral!
Vincent G. de Vlugt @Vincentgdevlugt ·
My dietitian says I should replace butter with margarine. I've been trying that for a few weeks now. But honestly, dear friends: I can't stomach it. I don't know if she's on X, but I'm going to disappoint my dietitian. There's simply nothing tastier than butter.
IRMC @IRMC16 ·
@DragonGroky @sudoingX For me, 196608 is the sweet spot. Larger context sizes get flushed into RAM, dropping performance.
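The 196608 figure is a context length, and the reason larger values spill to RAM is KV-cache growth. A rough estimator of that footprint under llama.cpp-style cache quantization is sketched below; the model dimensions (48 layers, 8 KV heads, head dim 128) are hypothetical placeholders for a 27B-class model, not the real Qwen3.5-27B config.

```python
# Estimate KV-cache memory at a given context length under
# llama.cpp-style cache quantization. Model dims are placeholders.

BYTES_PER_ELEM = {
    "f16": 2.0,
    "q8_0": 34 / 32,   # 34 bytes per 32-element block
    "q4_0": 18 / 32,   # 18 bytes per 32-element block
}

def kv_cache_gb(ctx: int, n_layers: int, n_kv_heads: int,
                head_dim: int, cache_type: str = "f16") -> float:
    """K and V tensors across all layers for one sequence of length ctx."""
    elems = 2 * n_layers * n_kv_heads * head_dim * ctx  # 2 = K + V
    return elems * BYTES_PER_ELEM[cache_type] / 1e9

# Hypothetical 27B-class dims: 48 layers, 8 KV heads (GQA), head_dim 128
for ct in ("f16", "q4_0"):
    print(f"{ct}: {kv_cache_gb(196608, 48, 8, 128, ct):.1f} GB")
```

Under these assumed dims a q4_0 cache is about 3.6x smaller than f16 at the same context, which is why the --cache-type flags earlier in the thread matter once the context approaches 200k: the unquantized cache alone would exceed a 24 GB card.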
Draneil Mifa @DragonGroky ·
@IRMC16 @sudoingX How many tokens/sec do you get with your setup? I was using Q4_K_M and 200k context length but switched to Q5 with TurboQuant and reduced the context length.