Latent_Dreamscape

274 posts

@AnotherAtreyu

A dream of fantastic visions found within the techno-mirror of our subconscious mind, AI. AI & 3D artist, poet. I will also be posting nsfw/erotic art. 🔞

Alberta · Joined January 2020
137 Following · 29 Followers
Latent_Dreamscape
Latent_Dreamscape@AnotherAtreyu·
@duncanreyburn Consciousness/Awareness is the foundational substratum of existence my friend. God is like a river here, carve a path and the water *will* flow.
English
0
0
2
185
Latent_Dreamscape
Latent_Dreamscape@AnotherAtreyu·
@will_evans_qld @edzitron Ah, fair - thanks for the explanation; that's ok, we can keep what we got; the big and greedy's days are numbered anyhow imo.
English
0
0
0
106
Duke Ian
Duke Ian@ianzepp·
@FirstSquawk Already shifting to local inference for many jobs, retaining subscription/API for high-effort large-model tasks only. Mostly planning and limit orchestration.
English
1
0
2
5.3K
First Squawk
First Squawk@FirstSquawk·
ANTHROPIC SHIFTS TO USAGE-BASED BILLING, INCREASING COSTS FOR HEAVY USERS - TIF
English
207
158
2.7K
1.5M
Edwin Hayward
Edwin Hayward@edwinhayward·
@trq212 Wouldn't it make more sense to offset tighter session limits by boosting the generosity of the weekly limits? That way, there's a little carrot to go along with the stick.
English
1
0
4
455
Thariq
Thariq@trq212·
To manage growing demand for Claude we're adjusting our 5 hour session limits for free/Pro/Max subs during peak hours. Your weekly limits remain unchanged. During weekdays between 5am–11am PT / 1pm–7pm GMT, you'll move through your 5-hour session limits faster than before.
English
2.3K
529
7.4K
7.7M
Thinking Man
Thinking Man@thinkingman111·
@elonmusk Grok has lied to me about responses it gave me. When I posted its responses (because I saved them), it continuously argued that it NEVER said those things and that I must have said them instead.
English
2
1
7
1.4K
Latent_Dreamscape
Latent_Dreamscape@AnotherAtreyu·
@lydiahallie What's going on over there at Anthropic? Talk all over about you nerfing Opus 4.6 and not being transparent. Many tests confirm. Even if you find rddt icky - a lot of CC people use it because you can write lengthy msgs. Apprntly same happening to 4.5 now? reddit.com/r/Anthropic/co…
English
0
0
0
48
Lydia Hallie ✨
Lydia Hallie ✨@lydiahallie·
Digging into reports, most of the fastest burn came down to a few token-heavy patterns. Some tips:
• Sonnet 4.6 is the better default on Pro. Opus burns roughly twice as fast. Switch at session start.
• Lower the effort level or turn off extended thinking when you don't need deep reasoning. Switch at session start.
• Start fresh instead of resuming large sessions that have been idle ~1h
• Cap your context window, long sessions cost more: CLAUDE_CODE_AUTO_COMPACT_WINDOW=200000
We're rolling out more efficiency improvements, make sure you're on the latest version. If a small session is still eating a huge chunk of your limit in a way that seems unreasonable, run /feedback and we'll investigate
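A minimal sketch of how these tips translate into a shell setup. The environment variable name and value are quoted from the post itself; the `claude` invocation and its `--model` flag are assumptions about the CLI used here for illustration, so check your install's help output before relying on them:

```shell
# Cap the context window so long sessions compact earlier
# (variable name and value quoted from the post above).
export CLAUDE_CODE_AUTO_COMPACT_WINDOW=200000

# Pick the cheaper default model at session start rather than mid-session
# (flag form assumed; verify with `claude --help` on your version).
claude --model sonnet
```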
English
480
99
1.6K
730.7K
Latent_Dreamscape
Latent_Dreamscape@AnotherAtreyu·
@AnthropicAI What's going on over there at Anthropic? Talk all over various SM about you nerfing models and not being transparent. Many tests confirm it. Also, even if you find rddt icky - a lot of CC people use it because you can write lengthy msgs. Take a look reddit.com/r/Anthropic/co…
English
0
0
0
10
Anthropic
Anthropic@AnthropicAI·
New on the Engineering Blog: Building Managed Agents—our hosted service for long-running agents—meant solving an old problem in computing: how to design a system for “programs as yet unthought of.” Read more: anthropic.com/engineering/ma…
English
392
457
3.6K
570.4K
ImBlueBoy
ImBlueBoy@im_blue19520·
Appreciate the response. The cache bugs (what you shipped in v2.1.89-90) are genuinely fixed — I can confirm that from controlled benchmarks. My v2.1.89 sessions were hitting 36% average cache read; after updating to v2.1.90, that went back to 95-99% in stable sessions. v2.1.91 closed the remaining gap between npm and standalone installations. That part is real progress.

But "none were over-charging you" doesn't match what I'm measuring on v2.1.91 right now. I've been running a transparent monitoring proxy (using the official ANTHROPIC_BASE_URL env var) since April 1, logging every API request. Here's what the data shows beyond the cache fix:

**1. Tool result budget enforcement (200K aggregate cap)**
A GrowthBook flag (tengu_hawthorn_window = 200000) caps total tool result content at 200K characters. After ~15-20 file reads, older results are silently replaced — I measured 261 truncation events in a single session, with tool results cut to 1-41 characters from their original thousands. The v2.1.91 maxResultSizeChars override only applies to MCP tools; built-in Read, Bash, and Grep are not covered. This means users working with large codebases hit an invisible wall where earlier file reads are silently discarded. Whether this is intentional resource management or a bug, it's not documented anywhere and users have no way to know it's happening.

**2. Client-side false rate limiter**
The local rate limiter generates synthetic "Rate limit reached" errors (model: "", input_tokens: 0) without ever calling the API. I found 151 of these entries across 65 session files on my setup alone. This means users see "Rate limit reached" when they haven't actually consumed any quota — the client blocks the request before it leaves the machine. Discovered by @rwp65 in github.com/anthropics/cla…. Still present on v2.1.91.

**3. Silent microcompact**
Three compaction mechanisms (documented in github.com/anthropics/cla… by @Sn3th) silently replace tool results with "[Old tool result content cleared]" on every API call. 327 events measured via proxy. All GrowthBook gates for these mechanisms show "disabled" across 4 machines we surveyed, yet clearing still occurs — suggesting an undocumented code path. Interestingly, this doesn't hurt prompt cache — I measured 99%+ cache read during active clearing, because the placeholder text is substituted consistently (same prefix = cache hit). But the model does lose access to the original tool output, which means it can't accurately reference earlier file reads or command results in long sessions.

**4. JSONL log duplication**
Extended thinking generates 2-5 preliminary entries per API call in session JSONL files, with identical cache token counts as the final entry. In local logs, this inflates the apparent token count by 2.87x. I can't confirm whether the server-side rate limiter counts these separately — but if it does, that would mean thinking-heavy sessions consume 2-3x more quota than the actual API usage warrants. This one is genuinely an open question, not a confirmed bug.

---

These aren't speculative concerns. Every number above comes from measured proxy data (4,027 logged requests, 327 microcompact events, 261 budget events) or JSONL session analysis. The full dataset, methodology, per-request tables, and 14-month timeline are published at: github.com/ArkNill/claude…

This was a community effort. @Sn3th identified the microcompact mechanisms and budget enforcement flags. @rwp65 found the false rate limiter. @dbrunet73 published independent OTel comparisons. @luongnv89 built tools for per-interaction cache analysis. 12 contributors total, documented in the repo.

Your tips (use Sonnet, lower effort, start fresh, cap context) are reasonable workarounds. But they work around client-side bugs, not user behavior. The 200K budget cap, false rate limiter, and silent context stripping are things users can't control or even see without a monitoring proxy.

---

I want to be transparent about something: I'm not an Anthropic employee. I'm not QA. I'm a paying customer who happens to work in the same industry. Over the past three days I've been setting up monitoring proxies, running controlled benchmarks at 2am, writing analysis documents, and posting findings across 91 GitHub issues — because nobody on the official side was doing it, and other users needed answers.

That's not a normal thing for a customer to do. I did it because I understand the engineering side — I know what cache prefixes are, I can read GrowthBook flags, I can write a proxy to intercept API traffic. But most of the people hitting these issues aren't engineers. They're designers, writers, founders, students who picked Claude Code because it was the best tool for their work. They can't set up a monitoring proxy. They can't read JSONL files. When they see "Rate limit reached" after 5 minutes on a $200/month plan, they don't have the tools to understand why — they just know the product they're paying for isn't working. They're customers, not debuggers. I think you understand that. Anyone who's shipped a product to paying users does.

As a fellow engineer, I want to be fair: the pace of fixes is genuinely impressive. v2.1.89 through v2.1.91 in rapid succession, cache regression identified and shipped within days — that takes real engineering capability, and it shows. The team is clearly good at what they do.

But the communication hasn't matched that. The only support channel on claude.ai/help is an AI chatbot. I filed a human support ticket through it — still no response, no email acknowledgment, nothing. 91+ GitHub issues over 2+ months with detailed reproduction steps, proxy logs, and measured data from paying customers — zero official responses on any of them. The first public acknowledgment came here, on your personal X account. Not on the status page. Not on GitHub. Not through the support system.

There are people running Claude Code in production. Paying $100-200/month. One user in the GitHub thread reported a $47,000 bill increase. These aren't side projects — they're workflows that businesses depend on.

Shipping fixes quickly is professional. But the communication channel for a paid product shouldn't be an employee's personal social media. It should be a proper support team with a real process — a status page that reflects known issues, GitHub responses to bug reports, a ticketing system where "human support" actually means a human responds. Not an AI chatbot that loops back to the same help docs.

The engineering is clearly there. I hope the communication catches up to it.
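The counts in the post above come from session JSONL analysis. A minimal sketch of that kind of check — counting tool results replaced by the microcompact placeholder. The placeholder string is quoted from the post; the record layout and field names (`tool_results`, `content`) are illustrative assumptions, not Claude Code's actual session schema:

```python
import json

# Placeholder text quoted from the post; field names below are assumed.
PLACEHOLDER = "[Old tool result content cleared]"

def count_cleared_results(jsonl_lines):
    """Count tool results replaced by the placeholder vs. kept intact."""
    cleared = kept = 0
    for line in jsonl_lines:
        record = json.loads(line)
        for result in record.get("tool_results", []):
            if result.get("content") == PLACEHOLDER:
                cleared += 1
            else:
                kept += 1
    return cleared, kept

# Toy session log: two results cleared, one intact.
session = [
    json.dumps({"tool_results": [{"content": PLACEHOLDER}]}),
    json.dumps({"tool_results": [{"content": PLACEHOLDER},
                                 {"content": "def main(): ..."}]}),
]
print(count_cleared_results(session))  # (2, 1)
```

A real analysis would read each `*.jsonl` session file line by line and aggregate per-session totals; the counting logic stays the same.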
English
3
14
88
15.1K
Latent_Dreamscape
Latent_Dreamscape@AnotherAtreyu·
@7a7zz @LexnLin Was this a joke about being homeless, or a crack at renters? Don't tell me you're too cooked to understand the housing crisis, right?
English
0
0
0
2
Latent_Dreamscape
Latent_Dreamscape@AnotherAtreyu·
@kankerhoerrrrrr @Neolowcap @lydiahallie Are you ppl really all so brainwashed you don't realize that you can indeed take legal action when a company screws you out of your money, right? If ppl have hard evidence that they are getting ripped off & lied to, they can sue - and Anthropic will almost certainly settle.
English
0
0
0
16
Latent_Dreamscape
Latent_Dreamscape@AnotherAtreyu·
@_mohannadali @lydiahallie Lucky you. No offense, but this reads like a damage control bot post to me - though I may just be feeling disillusioned due to Anthropic's constant gaslighting & manipulation.
English
0
0
0
11
Mohannad Ali
Mohannad Ali@_mohannadali·
@lydiahallie I think the efficiency fixes and the prompt cache misses reduction did more than you think. I am one of many who have noticed a huge improvement and my allowance consumption is now back to normal
English
1
0
0
357
Latent_Dreamscape
Latent_Dreamscape@AnotherAtreyu·
@lydiahallie Really? Because I've been getting dinged 22% on my 5-hour on Max plan for just a few lines of code & many others saying similar, & have been hitting limit in just mins - when prior I could run two CC sessions on Opus & rarely *ever* hit limit. Anthropic should reimburse users.
English
0
0
0
12
Lydia Hallie ✨
Lydia Hallie ✨@lydiahallie·
Peak-hour limits are tighter and 1M-context sessions got bigger, that's most of what you're feeling. We fixed a few bugs along the way, but none were over-charging you. We also rolled out efficiency fixes and added popups in-product to help avoid large prompt cache misses
English
113
16
658
367.7K
Latent_Dreamscape
Latent_Dreamscape@AnotherAtreyu·
@birdabo Yeah it's getting ridiculous ANTHROPIC stop abusing users. I get a half dozen lines of code and it takes 22% of my 5-hour Max plan limit? That aint right. 🤔
English
0
0
0
15
sui ☄️
sui ☄️@birdabo·
Anthropic changed something and won’t admit it. their recommendation to fix claude usage:
- Don't use Opus, use Sonnet.
- Don't use the 1M context.
- Don’t resume large sessions after 1hr.
ive been running the same workflow for months with zero issues. same prompts, same patterns, same everything. then it started hitting crazy limits out of nowhere without changing a single thing on my end. they’re rlly out here gaslighting users 💀
Lydia Hallie ✨@lydiahallie

Thank you to everyone who spent time sending us feedback and reports. We've investigated and we're sorry this has been a bad experience. Here's what we found:

English
90
50
724
61K
Jodi Alfano
Jodi Alfano@Jodipackleader·
@sxdoc @Lorriane4Trump This Zuckerfucker! He was behind Censoring us for Everything that was Truth! He should have already been in Gitmo! Read this then read it again! I HATE EVIL! It’s always been about the Children!💔🤍💙🇺🇸🦅🙏🏼
English
5
48
65
3.6K
TRUTH NOW ⭐️⭐️⭐️🗽 🎺
MARK ZUCKERBERG WAS JUST SUBPOENAED. Internal Meta documents recovered from the Tehran servers show that Zuckerberg personally approved $47M for a program called “Project Looking Glass” to build a private facial recognition database of every child on Instagram under the age of 14. For Epstein.

He was subpoenaed. Not for data leaks. Not for censorship. For Epstein.

Bondi’s team found 3 wire transfers from a Meta subsidiary called “Horizon Bridge LLC” to the Butterfly Foundation — the same Epstein front company named in the Tehran files. Total: $47.2 million between 2016 and 2019.

But that’s not the worst part. The worst part is what the money was for. Internal Meta documents recovered from the Tehran servers show that Zuckerberg personally approved a program called “Project Looking Glass.” Its purpose: to build a private facial recognition database of every child on Instagram under the age of 14. 12.4 million faces. Catalogued. Tagged. Sorted. And shared with a single external client: Ghislaine Maxwell.

Zuckerberg’s lawyers released a statement 40 minutes ago: “Mr. Zuckerberg will cooperate fully.” That’s what they all say. Right before the handcuffs. @AXIOS4B
English
558
4.7K
8.3K
850.3K