Marcos V
@TheRealMarcosV
professional beta tester
11.9K posts
Joined August 2010
353 Following · 299 Followers

Pinned Tweet
Marcos V @TheRealMarcosV
A couple of days with the Solana Seeker and it's been awesome!! @solanamobile
[image]
51 replies · 17 reposts · 274 likes · 26.4K views
Marcos V @TheRealMarcosV
Got Gemma 4 working with full context on multi-GPU.
[GIF]
0 replies · 0 reposts · 0 likes · 9 views
Boris Cherny @bcherny
Starting tomorrow at 12pm PT, Claude subscriptions will no longer cover usage on third-party tools like OpenClaw. You can still use these tools with your Claude login via extra usage bundles (now available at a discount), or with a Claude API key.
830 replies · 336 reposts · 4.2K likes · 1.2M views
Marcos V @TheRealMarcosV
Anthropic throwing me a bone for complaining on X
[image]
0 replies · 0 reposts · 0 likes · 1 view
Marcos V @TheRealMarcosV
@lmstudio @GoogleDeepMind Having trouble loading bigger context on any of these models on a multi-GPU rig with 32 GB VRAM. Any ideas? The runtime has been updated, btw.
0 replies · 0 reposts · 0 likes · 42 views
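As rough context for why big contexts fail on a 32 GB rig: the KV cache alone grows linearly with context length, independent of the model weights. The sketch below uses a made-up mid-size transformer config (48 layers, 8 KV heads, head dim 128), not the actual dimensions of any of the models mentioned.

```python
# Back-of-the-envelope KV-cache size estimate for a transformer.
# All model dimensions here are hypothetical placeholders.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem=2):
    # Two cached tensors per layer (K and V), each of shape
    # [n_kv_heads, ctx_len, head_dim], at bytes_per_elem per value (fp16 = 2).
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

# Hypothetical config at 128K context with an fp16 cache:
full_ctx = kv_cache_bytes(48, 8, 128, 131_072)
print(full_ctx / 2**30)  # 24.0 GiB for the KV cache alone, before any weights
```

Under these assumed dimensions, a 128K-token fp16 cache already eats 24 GiB before weights are loaded, which is why cutting the context length or quantizing the KV cache is the usual fix on a 32 GB multi-GPU setup.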
dax @thdxr
what if we gave you unlimited tokens for free and we also paid you
707 replies · 33 reposts · 3.6K likes · 239.6K views
Ettore Di Giacinto @mudler_it
APEX + TurboQuant from @no_stp_on_snek are landing in @LocalAI_API - Q8_0 quality at half the size and significantly faster inference. To benchmark bigger models we need GPU access. If you can help, DM me 🙏 What model do you want quantized with APEX next? 👇
11 replies · 4 reposts · 33 likes · 2.8K views
Marcos V @TheRealMarcosV
I cannot keep up; every day it's a new model.

Quoting Ettore Di Giacinto @mudler_it:
I've just released APEX (Adaptive Precision for EXpert Models): a novel MoE quantization technique that outperforms @UnslothAI Dynamic 2.0 on accuracy while being 2x smaller for MoE architectures. Benchmarked on Qwen3.5-35B-A3B, but the method applies to any MoE model. Half the size of Q8. Perplexity comparable to F16. Works with stock @ggml_org's llama.cpp. Open source (of course!), with ❤️ from the @LocalAI_API team. 👇 Links to the model, repository and benchmarks below! (+ Bonus TurboQuant benchmarks with @no_stp_on_snek's TQ+!)

0 replies · 0 reposts · 1 like · 30 views
Marcos V @TheRealMarcosV
@ashen_one Codex loves to hand-hold and makes you waste 5-6 prompts before getting what you want done; Opus just goes YOLO.
1 reply · 0 reposts · 1 like · 18 views
ashen @ashen_one
I think Codex is amazing, but in my experience, for what I use it for, Claude has always been the most effective. I use it for vibe coding and OpenClaw more than anything. A lot of the stuff that I require my OpenClaw to do is free-range stuff that he uses a lot of API calls and browser access for.

During the week that I tested Codex on one of my OpenClaws, I ran into a lot of problems where he would just tell me that he's not allowed to or doesn't know how to do certain things. That level of autonomy is something that I've never dealt with with Claude. Claude is less of a fed: if he doesn't know how to do something, or even if it's a little bit of a gray area, he'll still do it. Codex on OpenClaw would always just refuse to do certain things that he was hard-coded not to do.

Switching models became kinda annoying too. No need to waste brain power on these different models; I think it's best to just find one you love for your specific use cases and focus on using it rather than trying to find the "perfect model". I also think that since Claude Cowork is basically OpenClaw, it makes sense to just claudemaxx and not deal with switching models.

Quoting Hagrid @rubieshagrid:
@ashen_one As a professional Claude maxer, do u literally use Claude for everything and not Codex at all, or do u think Codex is better at implementing Claude's plans, or vice versa? I'm lost rn and Claude is being so ass.

10 replies · 0 reposts · 24 likes · 2.1K views
Marcos V @TheRealMarcosV
@manoj_ahi It's alright, way more usage, but I think Opus really is better than ChatGPT.
0 replies · 0 reposts · 0 likes · 476 views
Manoj Ahirwar @manoj_ahi
Ok, enough!! Time to cancel my Claude subscription. Is Codex any good?
[image]
218 replies · 22 reposts · 1.2K likes · 132K views
mr-r0b0t @mr_r0b0t
@TheRealMarcosV V2 is available in his collections, V3 is dropping now, he's got 9B done, probably finishing up the others as we speak 😁
1 reply · 0 reposts · 0 likes · 19 views
Marcos V @TheRealMarcosV
@mr_r0b0t Did it just drop? I downloaded this one last night, not sure if it's V2 or not lmao
1 reply · 0 reposts · 1 like · 18 views
Marcos V @TheRealMarcosV
Tool calling seems to work exceptionally well compared to Qwen3.5-35B-A3B, which is what I was using before.
0 replies · 0 reposts · 0 likes · 12 views
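The tool-calling comparison above depends on the request format local servers expose. As a reference point, here is a minimal sketch of the OpenAI-style `tools` payload that most local OpenAI-compatible endpoints (llama.cpp's llama-server, LM Studio, LocalAI) accept; the model id and the `get_weather` function are hypothetical examples, not anything from the thread.

```python
import json

# Minimal OpenAI-compatible chat request with a tool definition.
# The model id and the tool itself are made-up placeholders.
request = {
    "model": "local-model",  # placeholder; many local servers ignore this field
    "messages": [
        {"role": "user", "content": "What's the weather in Lisbon?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool
                "description": "Look up current weather for a city.",
                "parameters": {  # JSON Schema describing the arguments
                    "type": "object",
                    "properties": {
                        "city": {"type": "string"}
                    },
                    "required": ["city"],
                },
            },
        }
    ],
}

print(json.dumps(request, indent=2))
```

A model that handles tool calling well responds with a `tool_calls` entry naming the function and JSON arguments instead of free-text prose, which is what the quality differences between models tend to show up in.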