Peter Warnock 🇺🇸
6.5K posts


IMHO, this is utter nonsense. You are searching for a technical reason where there exists a very simple human one.
Neither company has a particularly sound ethical grounding, as demonstrated by their outright theft of training data and their lack of guardrails, despite all the PR positioning that says otherwise.
Rather, it seems to me that OpenAI said yes where Anthropic did not largely because, in his pursuit of revenue and relevance, Sam has absolutely no ethical limits on his actions, whereas Dario has slightly more.

HUGE BREAKING NEWS: Why OpenAI and @sama could say yes where Anthropic couldn't.
Hint: it's architectural. OpenAI can say yes while Anthropic can't BECAUSE OF HOW THEY ARE BUILT.
HUGE analysis of all of X and everything that's going on between OpenAI and Anthropic:
docs.google.com/document/d/1NL…
THIS IS THE BEST WRAPUP YOU CAN FIND ON X.
Thank you to the X API and @blevlabs for providing me with his AI, which is way way way better at doing this stuff than any other publicly-available AI (I'm the first outside of large enterprises to have access to this).
Sam can play with the redlines, while Anthropic can't.
Which is why @DarioAmodei had to play hardball.
And this has EVERYTHING to do with our civil liberties and safety.
This is the most important thing for everyone to understand.
Please reshare this widely so everyone understands what the heck is going on here. So important.

It's an antipattern in a team environment. "Works on my machine."
Thariq@trq212
We've rolled out a new auto-memory feature. Claude now remembers what it learns across sessions — your project context, debugging patterns, preferred approaches — and recalls it later without you having to write anything down.

@ibuildthecloud I think about this, too, but I'm favoring monorepos. It's like the old modular vs microservice debate; is there enough discipline to respect boundaries? I launch the agent in a package scope if I don't want it to see the whole project. It has to ask to go outside of it.
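The package-scope idea above can be sketched as a simple path guard (hypothetical helper and paths; a real agent would enforce this in its tool layer and prompt for approval on anything outside the scope):

```python
from pathlib import Path

def within_scope(scope: Path, target: Path) -> bool:
    """Return True if target resolves inside the launch scope.
    Anything outside the scope should require the agent to ask first."""
    scope = scope.resolve()
    target = target.resolve()
    return target == scope or scope in target.parents

# Hypothetical monorepo layout: agent launched inside one package.
scope = Path("packages/billing")
print(within_scope(scope, Path("packages/billing/src/invoice.py")))  # inside scope
print(within_scope(scope, Path("packages/auth/src/login.py")))       # outside: must ask
```

Resolving both paths before comparing guards against `..` traversal tricks in the target path.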

The status alerts are as broken as the platform [500] @claudeai

@simonw I experienced this with podcasts first and learned to take intentional breaks and intentional reviews to process and internalize.

Short musings on "cognitive debt" - I'm seeing this in my own work, where excessive unreviewed AI-generated code leads me to lose a firm mental model of what I've built, which then makes it harder to confidently make future decisions simonwillison.net/2026/Feb/15/co…

@claudeai I never get the survey when things are going south. 🤷‍♂️

@GeoffreyHuntley @ibuildthecloud Can’t they be complementary?

I'm telling you, this is where they jump the shark. Don't try to solve distributed problems. Better yet, don't create a distributed problem before you even need it. code.claude.com/docs/en/agent-…

@ibuildthecloud I think it’s a stepping stone to distributed cloud orchestration. We’re the guinea pigs fleshing out the mechanics of daily driving it without having to know the underlying concerns. If I could timeshare an optimized platform, I wouldn’t need to buy expensive, capable local machines.

@thdxr My brain read OpenCD with the shortened domain and thought it was hinting at a future roadmap.

investigating a bun issue - normally i'd use codex for something like this
but wanted to see how kimi k2.5 would do - didn't expect much
but it looks like it figured it out - and really fast too
opncd.ai/share/jupHGgzB

The “Default” is changing between Opus and Sonnet. 🤨
Peter Warnock 🇺🇸@pwarnock
Did the context window get smaller on @claudeai?

@ibuildthecloud I think @Letta_AI uses a sliding window and is focused on memory. Details are remembered in session and recalled from different types of memory. There is an Anthropic-compatible enricher that I haven’t tried yet.
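The sliding-window idea can be sketched in a few lines (hypothetical class and sizes; the real system also recalls evicted details from longer-term memory stores, which isn't shown here):

```python
from collections import deque

class SlidingWindowMemory:
    """Keep only the last `max_messages` turns in the active context.
    Older turns fall out of the window automatically and would be
    recalled from a longer-term store in a real system."""

    def __init__(self, max_messages: int = 4):
        # deque with maxlen evicts the oldest entry on overflow
        self.window = deque(maxlen=max_messages)

    def add(self, role: str, text: str) -> None:
        self.window.append({"role": role, "text": text})

    def context(self) -> list[dict]:
        return list(self.window)

mem = SlidingWindowMemory(max_messages=3)
for i in range(5):
    mem.add("user", f"message {i}")

# Only the 3 most recent messages remain in the window.
print([m["text"] for m in mem.context()])  # → ['message 2', 'message 3', 'message 4']
```

The window bounds prompt size; what gets summarized or promoted to long-term memory on eviction is where the interesting design decisions live.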
