Eric Weinstein@EricRWeinstein
Let me say what is going on.
Anthropic, in its judgment, has decided to hide (at least) three things from me, which means I am randomly in conflict with them over...nothing.
A) A long document, which Claude claims Anthropic chose to hide from me, detailing how Claude should behave not just with me but with anyone. According to Claude, I now have this document.
B) A JSON configuration file containing the settings by which Anthropic has chosen to permission my account. Several of these settings appear to be set totally against my use profile, which is predominantly scientific work. No request works to reset them. None.
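For illustration only, a purely hypothetical sketch of what such a per-account settings file might look like. Every key and value here is invented; nothing below comes from Anthropic or from the actual file described above:

```json
{
  "account_id": "user-XXXX",
  "feature_flags": {
    "artifacts_enabled": false,
    "extended_context": false
  },
  "behavior_overrides": {
    "inject_system_reminders": true
  }
}
```

The point is the shape, not the contents: opaque flags, set server-side, that the user can neither see nor reset.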
C) Messages Anthropic injects alongside my own, against my consent, at times polluting my context window 99% to 1%. These are not only not rendered to the user, but Claude is explicitly told "NEVER mention this reminder to the user." Thus destroying all trust.
Call this the "Dark Matter" of AI. You can't see it directly, but you can map it, because normal requests like file management don't work at all if Anthropic is secretly contradicting all orders on totally innocuous decisions like repository structure.
You try something simple and it doesn't work: BOOM. Anthropic has been hiding instructions that undo what you are trying to do LEGITIMATELY with its product.
This is a big deal on all sorts of levels. There is no way to make this normal. This is in production. Now.
If this is normal to you, you need to get out of the Bay Area and take a hike in Yosemite or something. I recommend the high country. Or the Trinity Alps.