Dvmso
@DvmsoBBM
8 posts
Joined January 2022
15 Following · 0 Followers
Dvmso @DvmsoBBM·
@7a7zz @omni1896837 I do like them for clearly bounded tasks though, since they're way cheaper and in theory good enough. For example, when a task is easily definable with unit tests, or for scoped changes where the outcome quality is easy to test.
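To make the "easily definable with unit tests" point above concrete, here is a minimal sketch of such a bounded task. The `slugify` helper and its spec are hypothetical, invented for illustration and not taken from the thread; the idea is that when the acceptance criteria fit in a few assertions, any model's output can be verified mechanically.

```python
# Hypothetical example of a "clearly bounded task": the spec fits in a few
# unit-style assertions, so a cheaper model's output is trivial to verify.
import re

def slugify(title: str) -> str:
    """Lowercase the title, keep alphanumeric runs, join them with hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

# The acceptance criteria double as the tests:
assert slugify("Hello, World!") == "hello-world"
assert slugify("  many   spaces ") == "many-spaces"
```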
7A7z @7a7zz·
I'm sorry, but if you think that about Chinese models, you have not used Chinese models that much. They are more than good enough for 80% or even 90% of use cases, and I do think the value of opencode go and Copilot Pro is very close. I used Copilot Pro a lot, and right now I'm using opencode go; I'll document how close they are in terms of value later this month.
7A7z @7a7zz·
got opencode go :)
Dvmso @DvmsoBBM·
@7a7zz @omni1896837 Why do you think I pay for opencode go lol? Believe me, I use them all the time; they're still either unreliable, inconsistent, or both. But then again, I don't really do web dev, so perhaps that's why I have a drastically different view on them.
Dvmso @DvmsoBBM·
@bridgemindai Bro fell for the local hype, just to end up with some small ass 2025 model LOL
BridgeMind @bridgemindai·
Spent the last few days vibe coding on my NVIDIA DGX Spark. Here's what I learned. Qwen 3.5 122B took one minute and nine seconds to respond to "Hi, how are you doing". Unusable for vibe coding. Gemma 4 was fast but built a dot instead of a first-person shooter game. GPT-OSS 120B was the sweet spot: fast, capable, and actually produced working HTML. Open-source models running locally are not replacing Claude Opus 4.6 or Codex with GPT 5.4. Not even close. But they're getting better every month. The new DGX Spark Bench is live on bridgebench.ai. Real-world benchmarks for local models on local hardware. This is just the start. Full video below.

Quoting BridgeMind @bridgemindai:
Claude Code rate limited me so hard I bought a $5,000 NVIDIA DGX Spark. Arriving tomorrow. A personal AI supercomputer. Anthropic cut off OpenClaw users. Slashed Claude Opus 4.6 rate limits. Told $200/month Max plan customers to use less. Then gave us a credit as an apology. This is what happens when AI companies have too much power over your workflow. One update and your entire stack breaks. Local models are the only infrastructure no one can throttle. No rate limits. No 529 errors. No surprise policy changes. Tomorrow I'm testing the DGX Spark live on stream, running local models through real vibe coding workflows. The goal is simple: never depend on a single provider again.
Dvmso @DvmsoBBM·
@7a7zz @omni1896837 Yeah sure, you can get both, but if we're comparing $10 vs $10, it's not even close, both in terms of usage and because with opencode all you get is Chinese benchmaxxed slop compared to GPT and Opus (and if it wasn't clear with mythos already, China isn't even close).
7A7z @7a7zz·
That is true, but with Copilot you could have some prompts that run for like 50k tokens and some that run for like 100 tokens; imo it depends on how you use it. For me the GitHub Copilot sub worked well, and the opencode go sub is working well too. Right now you can get the best value for $20 by getting both an opencode go sub and a Copilot sub and using both of them inside opencode: use opencode go and its open models as workhorses, and use Copilot and its frontier models like GPT 5.4 for big tasks.
Dvmso @DvmsoBBM·
@7a7zz @omni1896837 That's just because they can't subsidize nearly as much as Microsoft lol. Microsoft charges you per prompt, not per token or whatever, so I've literally had 1B-token runs on Opus for 4 cents. Opencode will never be able to compete with large corporations.
7A7z @7a7zz·
@DvmsoBBM @omni1896837 Why is that? Can you drop me a DM explaining how exactly that happens, and I will test it and reach out to the opencode team.
Dvmso @DvmsoBBM·
@7a7zz @KkCarson92338 I don't know if they used AI to build it or something, but opencode is genuinely the worst harness ever built. Also, after heavy usage it climbs to 14GB+ of RAM usage and nukes my PC.
Dvmso @DvmsoBBM·
@7a7zz @omni1896837 My usage dies after a few prompts; it's a pretty bad deal really, especially for inferior models.
7A7z @7a7zz·
@omni1896837 No, it supports open models, but for $10 it gives A LOT of usage, and these open models are really, really good; for most people Kimi k2.5 and GLM-5 are more than enough.
Dvmso @DvmsoBBM·
@7a7zz @KkCarson92338 Actually, opencode consistently worsens output quality on various benchmarks for both GPT and Claude models. I thought you guys already knew this.
7A7z @7a7zz·
@KkCarson92338 Used Copilot before, using opencode now. I can tell with confidence opencode is a much, much better harness.