Mateusz Mirkowski

2.6K posts

@llmdevguy

Autonomous agents, agentic engineering. Building & testing agentic systems. Exploring local LLMs.

Remote work evangelist · Joined March 2013
132 Following · 511 Followers
Pinned Tweet
Mateusz Mirkowski @llmdevguy
⭐️These plans are still the best. Buy them now while they’re still this cheap. They will rise like GLM plans!!! Today I coded for 3 hours, constant refactoring, code reviews etc. Just 2% weekly usage. 2%! 45000 requests per week. The quality is really good, at least like Sonnet 4.5. Very fast. Also one of 3 best models for OpenClaw or Hermes. Mark my words. This is the last time you’ll see prices this low.
Mateusz Mirkowski@llmdevguy

OK, that was fast… GLM 5.1 was the best model in terms of quality and price, for just a few days. 🙈 At this price it's still a good option, but instead of $72 I would rather pay $100 for Codex: more reliable models. For GLM, go for OpenCode GO at $5; 4,400 requests per month is not bad to play with it. It's slow, but it works. If you like it, go with Lite. The king of value stays MiniMax 2.7.

k.~ 🚀 @_atl3
@llmdevguy I'm gonna be charged the new price soon 🙈?
OLLIE @ollies0x
@MemoryReboot_ @llmdevguy @TheAhmadOsman In Australia they're $1,500 (AUD) and being price-gouged (some listed at $2,500). An RTX 5090 is $6,000. You can pay for Ollama cloud for 200 years for the price of one RTX 5090. 100% agree local is the future, but in AU the cost vs. efficiency isn't there yet; tech development is coming.
Ahmad @TheAhmadOsman
if you don't have GPUs already then you're kinda late to the game anon
0xSero @0xSero
I got an offer to get paid for a genuine review of a new potential model, I took it because they said I could essentially say whatever I want. I like people that want real feedback
Mass @MemoryReboot_
@TheAhmadOsman In Ukraine, where I'm from, it's still at $750 😂
Mateusz Mirkowski @llmdevguy
Someone overdosed on vibe coding at X. 😅
Mike Bradley @The_Only_Signal
@llmdevguy Haha I wish! Best I can do is beers on me if you come to Philly 🍻
Mike Bradley @The_Only_Signal
2x RTX PRO 6000 tower incoming…
Mateusz Mirkowski @llmdevguy
@robinebers I want to know that too. I heard the speed is the same between Free and Pro; if that's true, it means it's slow. Try the Free plan first, you can test GLM 5.1 or M2.7.
Robin Ebers | AI Coach for Founders
Has anyone tried the limits on Ollama cloud? Especially interested in:
- actual limits on Pro + Max
- tok/s for the big models
Mateusz Mirkowski @llmdevguy
That was a crazy Sunday. Thank you so much for the replies, likes and reshares. I hope some of the 50 new followers will engage with my posts. 🥰 Now, back to getting 30-40 impressions on regular posts. 😂
Eric ⚡️ Building...
@llmdevguy @nikitabier That's when everything took off for me, honestly! I had 400 followers then. I just learned the algo and picked up on what people want to see / what they actually want to hear.
Eric ⚡️ Building...
Yooo @nikitabier start sending those checks 👋🏻 WE DID IT CHAT 👋🏻 Let's gooo. Build In Public guys, trust! Reply with any questions you have 👇🏻
Mateusz Mirkowski @llmdevguy
@MatthewBerman @NVIDIA_AI_PC The future is in local, small models. It's a myth that you need expensive hardware. Useful models like Qwen 9b run on mediocre laptops or Mac minis. :) For better results, go with Qwen 3.5 27b or Gemma 4 31b.
Matthew Berman @MatthewBerman
I rebuilt much of my OpenClaw stack to run on local models. Getting this right is harder than it looks. I partnered with @NVIDIA_AI_PC to show you exactly how my hybrid local/hosted architecture works:
Yam Peleg @Yampeleg
If you want to feel how much Opus got degraded, go try Sonnet for a sec.