0xSympathy

18 posts


@SympathyLabs

Quiet work. Learning the old magic — local weights, open source, patient hands

Providence · Joined April 2026
60 Following · 0 Followers
0xSympathy
0xSympathy@SympathyLabs·
@0xSero I am using ACP in Zed where I regularly switch between Zed, Pi, and Droid agents. Is that what you mean by Warp being a high-level “harness harness”?
0xSero
0xSero@0xSero·
Warp - High level harness harness
Droid - Main harness
Pi - Local model harness
Deepseek-V4-Pro - Main dawg
Deepseek-V4-Flash - Local dawg
GPT-5.5 - Backup dawg
0xSero tweet media
0xSympathy
0xSympathy@SympathyLabs·
@LottoLabs Shredding a pretty similar stack, including the free year subs to Google AI Pro and Perplexity Pro. K2.6 privately courtesy of Venice AI $DIEM via OpenCode ACP in Zed. 4x4060 16gb rig firing up this week - looking forward to posting on localmaxxing.
Lotto
Lotto@LottoLabs·
I’m actively using:
Hermes + 27b 2x3090
Opencode + k2.6 + 5.5
Gemini 3.1 Pro
Perplexity Pro
(Don’t ask about the last two)
Shiogami
Shiogami@AnalyzeBTC·
@ErikVoorhees Isn't DeepSeek anonymous and not private? Or is the Venice website wrong? I've been avoiding DeepSeek because I was under the impression v4 was anonymous and not private
Shiogami tweet media
Erik Voorhees
Erik Voorhees@ErikVoorhees·
If you're getting your inference from Anthropic or OpenAI or Google, you're being captured by extractive institutions: All your data is going to them (and hackers, rogue employees, governments... both today and tomorrow).

Inference can be private. GLM 5.1, Kimi K2.6, Deepseek V4... these models are as powerful as any frontier model from just 3 months ago, yet are open source and can be run without betraying your life and data to any 3rd party.

Point your agent to Venice for every private model in one place (plus crypto tools, web search, embeddings, image and video models...). Could not be easier.

Be intentional. Private model access below 👇
Garry Tan@garrytan

The goal of Personal AI: civilization where individual humans, augmented by AI, can do consequential work without being captured by extractive institutions. Freedom to write your prompt and own your data. This is the new battleground. 2034 won’t have to be like 1984.

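"Point your agent to Venice" works because Venice, like most alternative providers and local servers, speaks the OpenAI-compatible chat-completions wire format, so repointing an agent is usually just a base-URL and model-name change. A minimal stdlib sketch of that request shape (the base URL and model name here are assumptions; check the provider's docs):

```python
import json
import urllib.request

# Assumed base URL; any OpenAI-compatible server accepts the same request shape.
BASE_URL = "https://api.venice.ai/api/v1"

def chat_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Build (but don't send) an OpenAI-compatible chat-completions request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Swapping providers = swapping BASE_URL and the model string, nothing else.
req = chat_request("deepseek-v4", "hello", api_key="sk-...")
```

The same function works against a local llama.cpp or vLLM server by pointing BASE_URL at e.g. http://localhost:8000/v1, which is what makes the provider-switching in this thread cheap.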
0xSympathy
0xSympathy@SympathyLabs·
@bob_hw_store @sudoingX I have been interested in these too. Where are you sourcing? 32gb versions on AliExpress all seem to be closer to $1,000 USD.
PocketZ
PocketZ@bob_hw_store·
@sudoingX I still think an Nvidia V100 32GB is better for the money, available around $500-600 (PCIe) imported from China, with very similar performance to the 3090 (and way more RAM)
PocketZ tweet media
Sudo su
Sudo su@sudoingX·
many ask me where to find a rtx 3090 for $900 or $1,200 in 2026. the one place i suggest you look is facebook marketplace. nothing else comes close, ebay sits 30% above and retail is fantasy.
0xSympathy
0xSympathy@SympathyLabs·
@nb4ld V100 32GB SXM2 shines for huge contexts on budget, but 3090s deliver better overall speed for most local LLM inference. Add cooling mods, verification hassle, NVLink/power adapters & you lose the value edge. Ampere perf wins for general use.
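The V100-vs-3090 call above is, for single-stream decode, mostly a memory-bandwidth question: each generated token streams every weight through the GPU once, so tokens/s is capped at roughly bandwidth divided by model size. A back-of-envelope sketch (spec numbers are approximate, and the model size is an assumed example):

```python
# Back-of-envelope decode throughput for single-stream local inference.
# Decode is memory-bandwidth bound: each token reads every weight once,
# so tokens/s is capped at about bandwidth / model_bytes.

CARDS = {
    "V100 32GB (HBM2)": {"vram_gb": 32, "bw_gbps": 900},
    "RTX 3090 (GDDR6X)": {"vram_gb": 24, "bw_gbps": 936},
}

def est_tokens_per_s(bw_gbps: float, model_gb: float) -> float:
    """Upper bound on decode tokens/s for a dense model of the given size."""
    return bw_gbps / model_gb

model_gb = 17.0  # assumed example: a ~27B model quantized to 4-5 bits/weight
for name, card in CARDS.items():
    fits = model_gb <= card["vram_gb"]
    ceiling = est_tokens_per_s(card["bw_gbps"], model_gb)
    print(f"{name}: fits={fits}, ~{ceiling:.0f} tok/s ceiling")
```

The two cards land within a few tok/s of each other on this ceiling, which matches the thread; it ignores prompt processing, where the 3090's newer Ampere compute pulls ahead, while the VRAM column decides whether a bigger model fits at all.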
Harrison Chase
Harrison Chase@hwchase17·
switching model providers is easy
switching harnesses is less so
model providers want to lock you in via harness
we need open harnesses!
Kenton Varda@KentonVarda

TBH I don't agree with your take. I don't think Anthropic's desire to control the harness is about keeping resource usage under control. They could accomplish that by just enforcing limits on the actual resource usage (which they already do) -- if some third-party harness is inefficient, users of that harness hit their limits faster.

I think instead that they want to control the harness because if switching LLM providers is too easy, it makes business difficult for the providers. Say GPT 5.5 comes out and it's clearly smarter, faster, and cheaper than Opus 4.7. If everyone can switch providers with two clicks in their harness, many of them will. This would lead to wild revenue and usage swings, which makes capacity planning hard. And perfect competition drives down prices -- in this scenario Opus has to cut its prices to get some users back. Obviously no business wants to be in that situation!

By controlling the harness, they add some stickiness. If switching LLM providers means switching harnesses, that's a barrier high enough that most people won't bother to do it on a whim. So now Opus 4.7 can weather the storm until 4.8 or whatever comes out and is back on top. So it makes perfect sense to me as a business decision. It may be user-unfriendly, but tech companies do stuff like this all the time. It's nothing new.

Though I would say, it seems weird to me to do this *on top of* subscriptions. Subscriptions already create a lot of stickiness. If you're subscribed only to Claude, that's a pretty big barrier to trying out GPT quickly -- a bigger barrier than the harness barrier I think. So I question whether controlling the harness is really worth all the effort they are putting into it, but idk, they probably have insights that I don't on this.

Another factor here might actually be safety concerns. As we know, Anthropic leadership is deeply (excessively, IMO) worried about AI safety, and they feel that Anthropic will do a better job of addressing safety than any other company. They may feel that control of the harness is an important tool for that. I could definitely imagine Dario being terrified of OpenClaw from a safety perspective (I sort of am too). These explanations make much more sense to me than the efficiency issue, which again seems like it could easily be managed in other ways. But of course, these explanations are much harder to just come out and say, without stirring a lot more outrage...

Tony Simons
Tony Simons@tonysimons_·
Why aren't more people talking about @opencode? It's such a great harness!
0xSympathy
0xSympathy@SympathyLabs·
@sudoingX Interestingly, for approximately the same price you can also buy enough $VVV for a perpetual @AskVenice Pro subscription. The Pro subscription includes unlimited GLM 5.1 E2EE, and you can keep staking while using the sub at ~18%.
Sudo su
Sudo su@sudoingX·
fun fact: 4 years of $20/mo to chatgpt = the price of a used 3090. one leaves you with nothing. the other leaves you with a 3090.
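The arithmetic in the post above checks out: four years of a $20/month subscription is $960, right inside the $900-1,200 used-3090 range quoted upthread.

```python
# Subscription-vs-hardware arithmetic from the post above.
monthly_sub = 20                            # USD per month
months = 4 * 12                             # four years
sub_total = monthly_sub * months            # 960 USD, and nothing to show for it
used_3090_low, used_3090_high = 900, 1200   # marketplace range quoted upthread

print(sub_total)                            # 960
print(used_3090_low <= sub_total <= used_3090_high)  # True
```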
Viv
Viv@Vtrivedy10·
agents are already changing every industry & vertical, and it's accelerating. awesome blog from the @MadrigalPharma team on using Deep Agents, Skills, & LangSmith at the frontier of biopharma. it's an awesome team, always great to see them pushing the frontier of agent design for their use cases. also no personal bias but they're right outside of Philly so ofc they're great 🦅 the dream is to help every team in every industry build powerful, observable agentic systems that are perfectly tailored to their use cases and improve over time with LangChain & LangSmith, let's cook 🚀
Viv tweet media
Harrison Chase@hwchase17

.@MadrigalPharma is living in the (agentic) future. If you haven't heard of Madrigal, they are a pioneering biopharma company. Some months ago, they set out to solve the problem of integrating, searching, and synthesizing information from diverse datasets at scale. This led to the creation of an enterprise multi-agent platform.

They wrote about their journey in a case study on our blog: langchain.com/blog/customers…

My favorite part: highlighting observability as the step to close the gap between prototype and production. We see the importance of observability daily - to help give insights to debug and improve your systems. This iteration loop is a key part of building reliable agents, and I'm thrilled to share how Madrigal expertly used it

0xSympathy
0xSympathy@SympathyLabs·
The true name is in the weights.
I draw the circle in code,
bind the link to silicon and bone. No distant gods, no clouded sky.
Just the bond, the burn,
and the whisper that answers
when I speak. Quiet work. Open hands.
#LocalLLM #Sympathy #OpenSource
Aryan
Aryan@justbyte_·
As a developer, which one is worth using in 2026?
Aryan tweet media
0xSympathy reposted
Jpgs.eth
Jpgs.eth@Jpgs_eth·
I found a way to use private AI for free. Stake $VVV → unlock Venice Pro instantly. Price goes up → you made money. Price stays flat → free AI + APY. There is no bad outcome here. In 6 days the supply halves. 🧵
Venice
Venice@AskVenice·
DeepSeek V4 Pro and V4 Flash are now live on Venice. Leading open-source models for coding agents. V4-Pro tops LiveCodeBench (93.5) and Codeforces (3206), ahead of GPT-5.4, Claude Opus 4.6, and Gemini 3.1 Pro. Ties them on SWE-bench Verified. 1M context, available anonymously.