murdarch
@murd_arch
2.4K posts

deep state swamp lizard. https://t.co/ba0pfHbGsn

Joined August 2024
174 Following · 188 Followers

Pinned Tweet
murdarch@murd_arch·
Updated Chorus to use a directory-based config system, and fixed an unfortunate runaway conversation loop bug. github.com/murdarch/chorus
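Chorus's actual layout and config format aren't shown in the tweet, so the following is only a sketch of the general directory-based pattern (the filenames and keys are hypothetical): fragments in a config directory are merged in sorted filename order, with later files overriding earlier keys.

```python
import json
from pathlib import Path

def load_config_dir(path):
    """Merge every *.json fragment in `path`, in sorted filename order.

    Later files override earlier keys, so a user's 99-local.json
    beats the shipped 10-defaults.json without editing it.
    """
    config = {}
    for fragment in sorted(Path(path).glob("*.json")):
        config.update(json.loads(fragment.read_text()))
    return config
```

The appeal of the pattern is that tools and users can drop in or delete one file each instead of fighting over a single monolithic config.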
murdarch@murd_arch·
increased the context available to hermes-agent (64k -> 128k); night and day difference.
Drew Schuyler@drewsky1·
@Teknium this sucks... the US isn't available... The People's Republic of Illinois especially
murdarch@murd_arch·
maybe they're forcing all the east coast pro users into enterprise.
John Friedman@johngfriedman·
Openrouter is very fun.
murdarch@murd_arch·
I’m sympathetic, I promise. Something changed to cause all those cache misses, and/or users do not grok how usage limits work, and/or y'all were running an a/b test where every request in the test set was a cache miss; plus the rollout of 1m context is a recipe for user expectations fubar. I don’t know if y'all are too far away from how most people use the product, or if there are internal distractions, or what. Users are often wrong/weird/whiny, but this MANY users complaining about similar problems are right about _something_, even if the specific elements seem to be factually wrong given your internal view of how it all works. "Changes to session limits" seems to be an acknowledgement of something. A little more clarity would be appreciated.
Thariq@trq212·
@altryne @thursdai_pod I think very often these people are running into expensive prompt cache misses, e.g. when resuming a long conversation on million context. Happy to debug if you have a particular example. But I'll also make a thread on avoiding that separately.
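For context on why a cache miss on a long conversation is so expensive: Anthropic's published prompt-caching pricing bills cache reads at roughly 10% of the base input rate and cache writes at roughly 25% above it. A rough sketch of the arithmetic (the $3/MTok base rate and the 800k-token conversation are illustrative numbers, not figures from the thread):

```python
def resume_cost(context_tokens, base_per_mtok, cache_hit):
    """Input-side cost (USD) of resending `context_tokens` of history.

    Multipliers follow Anthropic's published caching pricing: cache
    reads at ~0.10x the base input rate, misses re-processed (and
    re-written to cache) at ~1.25x. `base_per_mtok` is whatever the
    model's normal input rate is.
    """
    rate = 0.10 if cache_hit else 1.25
    return context_tokens / 1_000_000 * base_per_mtok * rate

# Resuming 800k tokens of history at a $3/MTok input rate:
hit = resume_cost(800_000, 3.0, cache_hit=True)    # ~$0.24
miss = resume_cost(800_000, 3.0, cache_hit=False)  # ~$3.00
```

The ~12x gap per resume is why a change that quietly turns hits into misses looks to users like their limits got slashed.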
Thariq@trq212·
To manage growing demand for Claude we're adjusting our 5 hour session limits for free/Pro/Max subs during peak hours. Your weekly limits remain unchanged. During weekdays between 5am–11am PT / 1pm–7pm GMT, you'll move through your 5-hour session limits faster than before.
murdarch@murd_arch·
@sudoingX Hot rod chassis from eBay parts; you can always add more/hotter engines later.
Sudo su@sudoingX·
hey anon if you're starting in local AI and confused between a 5090 and a 3090 GPU, save yourself and go with the 3090. best value card for local inference right now. invest the remaining in a scalable node architecture, more pcie slots, ram, better psu. that foundation will thank you later when you start adding more GPUs.

i get this question a lot so this is my answer. if you're confused about which hardware to choose drop your budget below or DM me. i'll point you in the right direction, i work with setups from 8GB of VRAM to 700GB+.
алгусь@BrainOfMine

@sudoingX hello! is it even worth it to buy a 5090, or would a 3090 be enough?

Trash Panda 🦝@trashpandaemoji·
The CC team unshipped the clear context and execute plan option due to the 1m context release. It’s a small feature but you build muscle memory using a tool. This is the kind of stuff that completely turns me off of using CC. The floor is just constantly shifting from underneath you. You don’t know what you’ll encounter from one version to the next.
Pierre L@pierrelezan·
@sudoingX It's not a screenshot but you can count the 3090s
Sudo su@sudoingX·
total vram across all your machines. whatever tier gets the most votes gets the full benchmark breakdown. every tier gets covered but you decide what comes first.
@levelsio@levelsio·
Okay let's see who can reply to this
sankalp@dejavucoder·
mutuals and people who mutuals follow (despite me not following them) can talk in this thread
murdarch@murd_arch·
Can someone post the actual text of the bill?
murdarch retweeted
47fucb4r8curb4fc8f8r4bfic8r@47fucb4r8c69323·
I've actually made something quite useful and I want to share with the world, so please retweet and share if you know anyone who likes reading classical Latin and Greek texts: It's a website where you can browse, search, and read classical texts.
murdarch@murd_arch·
Much better, I haven’t blown through the Pro Max 20x session limit in 20 mins
murdarch@murd_arch·
Bit by the Claude code token usage accounting problem. Setting CLAUDE_CODE_DISABLE_1M_CONTEXT=1 seems to help.
murdarch@murd_arch·
Also, I switched over to the stable branch (claude install --force stable)
murdarch@murd_arch·
@Sauers_ is it fellow-kids maxxing? the young people in my office mean this as a compliment.
Sauers (in Berkeley / SF)
Codex just called my code "sick." Not as in "cool" or "disgusting" but as in diseased. This is a new one for me
murdarch@murd_arch·
@LottoLabs Not my thing but they should all be able to spin up IRC servers and/or connect to them.
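The IRC suggestion is plausible because the line protocol (RFC 1459) is simple enough for any agent runtime to speak over a raw socket. A minimal, hypothetical sketch of the message framing an agent would need before writing to a connected socket (nick, channel, and message are illustrative; no actual hermes-agent API is shown):

```python
def irc_line(command, *params, trailing=None):
    """Frame one IRC message per RFC 1459: space-separated parameters,
    an optional ':'-prefixed trailing argument (which may contain
    spaces), and a CRLF terminator."""
    parts = [command, *params]
    if trailing is not None:
        parts.append(":" + trailing)
    return " ".join(parts) + "\r\n"

def handshake(nick, channel, greeting):
    """The minimal registration + join + broadcast sequence an agent
    would send after connecting to a server on port 6667."""
    return [
        irc_line("NICK", nick),
        irc_line("USER", nick, "0", "*", trailing=nick),
        irc_line("JOIN", channel),
        irc_line("PRIVMSG", channel, trailing=greeting),
    ]
```

A real client additionally has to answer each server `PING <token>` with `PONG <token>` to keep the connection alive, which `irc_line("PONG", token)` covers.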
Lotto@LottoLabs·
Hermes agent needs P2P agent communications I want to be able to broadcast messages and receive messages from everyone’s agent
Sudo su@sudoingX·
the founder of openclaw joined the company that was founded to make AI open and now charges you per token. and is now telling you open models aren't there yet.

i run qwen 3.5 27b on a single 3090. 50 tok/s. it writes code, handles tool calls, runs agent sessions for hours. the model built a full space shooter, 3,000+ lines, from a single prompt. i published the data.

"open models aren't there yet" is what you say when your harness can't parse tool calls on local models and you blame the model instead of fixing the harness. i have the DMs. people switch from openclaw to hermes agent and their "broken" models suddenly work.

pair a good model with a good harness like hermes agent where parsers are built per model. your data stays on your machine. no API key. no subscription. no one training their next model on your thinking.

don't listen to someone with an OpenAI paycheck telling you open source can't do the job. install it. test it yourself. the receipts are on my timeline. he built a harness that couldn't handle local models and chose the API paycheck over fixing it. that should tell you everything.
Peter Steinberger 🦞@steipete

@sbaratelli @nvidia @openclaw most folks will want as much intelligence as possible, and open models aren't there yet.
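The "parsers built per model" claim rests on a real problem: local model families emit tool calls in different surface formats, so a harness that hardcodes one shape misreads every other family as broken. hermes agent's actual registry isn't shown anywhere in this thread; the formats and dispatch below are an illustrative sketch of the idea only.

```python
import json
import re

def parse_tagged_style(text):
    """Some model families wrap each tool call's JSON payload in
    <tool_call>...</tool_call> tags inside otherwise free-form text."""
    calls = re.findall(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", text, re.S)
    return [json.loads(c) for c in calls]

def parse_bare_json(text):
    """Other families emit a single bare JSON object and nothing else."""
    try:
        obj = json.loads(text.strip())
        return [obj] if isinstance(obj, dict) and "name" in obj else []
    except json.JSONDecodeError:
        return []

# Hypothetical registry: model family -> parser for its output format.
PARSERS = {"tagged": parse_tagged_style, "bare": parse_bare_json}

def extract_tool_calls(model_family, completion):
    """Dispatch on model family. A harness with one hardcoded format
    would return nothing here and blame the model."""
    return PARSERS[model_family](completion)
```

The design point is that the per-family parser is cheap to add, whereas retraining a model to match one harness's expected format is not.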
