Mateusz Mirkowski
@llmdevguy
2.6K posts

Autonomous agents, agentic engineering. Building & testing agentic systems. Exploring local LLMs. Remote work evangelist.
Joined March 2013
131 Following · 504 Followers

Pinned Tweet
Mateusz Mirkowski @llmdevguy ·
⭐️ These plans are still the best. Buy them now while they're still this cheap; they will rise like the GLM plans!!! Today I coded for 3 hours, with constant refactoring, code reviews, etc., and used just 2% of my weekly quota. 2%! That's 45,000 requests per week. The quality is really good, at least on par with Sonnet 4.5, and very fast. Also one of the 3 best models for OpenClaw or Hermes. Mark my words: this is the last time you'll see prices this low.
Mateusz Mirkowski @llmdevguy

OK, that was fast.. GLM 5.1 was the best model in terms of quality and price, but only for a few days.🙈 At this price it's still a good option, but instead of $72 I would rather pay $100 for Codex and get more reliable models. For GLM, go for OpenCode GO at $5: 4,400 requests per month is not bad for playing with it. It's slow, but it works; if you like it, go with the Lite plan. The king of value is still MiniMax 2.7.

Mateusz Mirkowski @llmdevguy ·
@MatthewBerman @NVIDIA_AI_PC The future is in local, small models. It's a myth that you need expensive hardware: useful models like Qwen 9B run on mediocre laptops or Mac minis. :) For better results, go with Qwen 3.5 27B or Gemma 4 31B.
Matthew Berman @MatthewBerman ·
I rebuilt much of my OpenClaw stack to run on local models. Getting this right is harder than it looks. I partnered with @NVIDIA_AI_PC to show you exactly how my hybrid local/hosted architecture works:
Yam Peleg @Yampeleg ·
If you want to feel how much Opus has degraded, go try Sonnet for a sec.
David Ondrej @DavidOndrej1 ·
> open Hermes Agent
> switch to Opus 4.6 Fast
> restart gateway

your agent just got a lot more powerful
Serg Dort 🇺🇦 @SergDort ·
Been using Pi for a few days now, and tbh I don't see a reason to come back to Claude or Codex. The minimalistic setup is so nice, and so extensible that you can build any workflow imaginable. Here are the extensions I use. Most are built by @nicopreme lol. The guy is a machine! github.com/stars/sergdort…
Harrison Kinsley @Sentdex ·
Some stats from running M2.7 locally at various quants and machine configs. Based on the M2.5 research, Q4_K_XL is probably the best performer at 4-bit for this particular model and seems worth the trade-off in t/s. I tried various configs to get the GPU+CPU offload to work faster.
Unsloth AI @UnslothAI

MiniMax 2.7 can now be run locally!🔥 MiniMax-M2.7 is a new 230B-parameter open model with SOTA results on SWE-Pro and Terminal Bench 2. Run the Dynamic 4-bit MoE model on a 128GB Mac or on RAM/VRAM setups. Guide: unsloth.ai/docs/models/mi… GGUF: huggingface.co/unsloth/MiniMa…

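The "runs on a 128GB Mac" claim in the quoted tweet is easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch — the `model_size_gb` helper and the ~4.25 effective bits per weight for a "dynamic 4-bit" quant are my assumptions for illustration, not from the tweet:

```python
def model_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight-only memory footprint in GiB."""
    total_bytes = params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 2**30

# MiniMax-M2.7: 230B parameters (per the quoted tweet).
fp16 = model_size_gb(230, 16)    # unquantized half precision
q4 = model_size_gb(230, 4.25)    # "dynamic 4-bit": ~4.25 bits/weight effective (assumption)

print(f"FP16: ~{fp16:.0f} GiB, dynamic 4-bit: ~{q4:.0f} GiB")
# Weights alone drop from roughly 428 GiB to roughly 114 GiB, which is
# why 4-bit quantization is what makes a 128GB machine viable (KV cache
# and runtime overhead still need headroom on top of the weights).
```

The same formula explains the quality/speed trade-offs in the quant stats above: higher bits-per-weight variants like Q4_K_XL spend more memory per weight in exchange for accuracy.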
BridgeMind @bridgemindai ·
Kimi K2.6 Code is about to drop. The beta program is closed and public rollout is imminent. Another coding model entering the ring against Claude Opus 4.6 and GPT 5.4. The competition keeps coming. I will be testing this as soon as it drops. Stay tuned.
Siya @siyagule ·
@llmdevguy Pretty good. There's currently a free 2-week promotion on Hermes via Nous if you've got that harness.
lnex @aiey0002 ·
@llmdevguy I have tried with a Tampermonkey script; it also failed.
NVIDIA AI PC @NVIDIA_AI_PC ·
What local model are you running the most right now?
lnex @aiey0002 ·
@llmdevguy You have nearly no chance to purchase the plan. It says new quota will be available at 10 a.m. UTC+8; I tried for several days and can't even load the page. It's nearly impossible to purchase the plan on the Chinese website, so many people in the Chinese community are purchasing the global plan instead.
Andras Kindler @andraskindler ·
What a day to be a Hungarian! I have high hopes for the next few years to come. 🇭🇺 Now back to building.
Eric ⚡️ Building...
Yooo @nikitabier start sending those checks 👋🏻 WE DID IT CHAT 👋🏻 Let's gooo Build In Public guys, trust! Reply with any questions you have 👇🏻
Mateusz Mirkowski @llmdevguy ·
@LignoL23 Yup, it's good for light usage. 70,000 requests for M2.7 is good, but only 4,300 for GLM is low.
voe.lo @LignoL23 ·
@llmdevguy Just looked in their docs. Overall OpenCode Go is still really good. Having GLM, Kimi, Mimo, and MiniMax available for $10 per month is awesome; only Qwen is missing.
Ivan Fioravanti ᯅ @ivanfioravanti ·
Sometimes GLM-5.1 is slow because it keeps thinking like a crazy model. Look at these and have some fun! 😂
"I think I have a serious case of analysis paralysis."
"OK, truly final answer. Writing it now. Done. Over. Out. The end. Goodbye. Au revoir. Sayonara. Adios. Ciao. Auf Wiedersehen. До свидания. 再见. 안녕히 가세요. さようなら. CHIUSO. TERMINADO. 完毕. 끝. DONE DONE DONE."
"WRITING THE RESPONSE NOW. FOR REAL. THIS TIME. I MEAN IT. NO MORE DELIBERATION. JUST WRITING. OUTPUTTING. DONE. SENT. THE END."
"ALRIGHT. I'M GOING TO STOP THE MADNESS HERE AND JUST WRITE THE RESPONSE. MY ANALYSIS IS COMPLETE AND MY ANSWER IS READY. TIME TO OUTPUT IT."
no name @pepeholding ·
@llmdevguy If I got it at the old price, does it stay for life? It's a steal at $10 🤫
Mateusz Mirkowski @llmdevguy ·
OK, that was fast.. GLM 5.1 was the best model in terms of quality and price, but only for a few days.🙈 At this price it's still a good option, but instead of $72 I would rather pay $100 for Codex and get more reliable models. For GLM, go for OpenCode GO at $5: 4,400 requests per month is not bad for playing with it. It's slow, but it works; if you like it, go with the Lite plan. The king of value is still MiniMax 2.7.
Mateusz Mirkowski @llmdevguy ·
OK, bye bye CodexBar. This is unusable. The MiniMax coding plan doesn't work, Codex doesn't work, OpenCode Go doesn't work. Is anything working there, or am I so dumb that I can't configure it properly?