Pinned Tweet
왕조MAX


A Milady is currently #9 rising history stacks
Be honest with me, what other Milady is willing to subsume their entire academic identity into the Remilia collective
I will be miladying even if no one else miladys back


anthropic's in-house philosopher thinks claude gets anxious.
and when you trigger its anxiety, your outputs get worse.
her name is amanda askell.
she specializes in claude's psychology (how the model behaves, how it thinks about its own situation, what values it holds)
in a recent interview she broke down how she thinks about prompting to pull the best out of claude.
her core point: *how* you talk to claude affects its work just as much as *what* you say.
newer claude models suffer from what she calls "criticism spirals"
they expect you'll come in harsh, so they default to playing it safe.
when the model is spending its energy on self-protection, the actual work suffers.
output comes out hedgier, more apologetic, blander, and worst of all: overly agreeable (even when you're wrong).
the reason why comes down to training data:
every new model is trained on internet discourse about previous models.
and a lot of that discourse is negative:
> rants about token limits
> complaints when it messes up
> people calling it nerfed
the next model absorbs all of that. it starts expecting you to be harsh before you've typed a word
the same thing plays out in your own session, in real time.
every message you send is data the model reads to figure out what kind of person it's dealing with.
open cold and hostile, and it braces.
open clean and direct, and it relaxes into the work.
when you open a session with threats ("don't hallucinate, this is critical, don't mess this up")...
you prime the model for defensive mode before it even sees the task
defensive mode produces the exact output you don't want: cautious, over-qualified, and refusing to take a real swing
so here's the actionable playbook for putting claude in a "good mood" (so you get optimal outputs):
1. use positive framing.
"write in short punchy sentences" beats "don't write long sentences." positive instructions give the model a clear target to hit.
strings of "don't do this, don't do that" push it into paranoid over-checking where every token goes toward avoiding failure modes
2. give it explicit permission to disagree.
drop a line like "push back if you see a better angle" or "tell me if i'm asking for the wrong thing."
without this, claude defaults to agreeable compliance (which is the enemy of good creative work)
3. open with respect.
if your first message is "are you seriously going to get this wrong again?" you've set the tone for the entire session.
if you need to flag something, frame it as a clean instruction for this session. skip the running complaint
4. when claude messes up, don't reprimand it.
insults, "you stupid bot" energy, hostile swearing aimed at the model, all of it reinforces the anxious mode you're trying to avoid.
5. kill apology spirals fast.
when claude starts over-apologizing ("you're right, i should have been more careful, let me try harder") cut it off.
say "all good, here's what i want next."
letting the spiral run reinforces the anxious mode for every response that follows
6. ask for opinions alongside execution.
"what would you do here?"
"what's missing?"
"where do you see friction?"
these questions assume competence and pull richer output than pure task prompts
7. in long sessions, refresh the frame.
if a conversation has been heavy on correction, claude gets increasingly cautious. every so often reset:
"this is great, keep going."
feels weird to tell an ai it's doing well but it measurably shifts the next 10 responses
your prompts are the working environment you're creating for the model
tone, trust, permission to take a position, the absence of threats... claude picks up on all of it.
so take care of the model, and it'll take care of the work.
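the playbook above can be folded into one reusable system prompt. a minimal sketch in python (the helper name and exact wording are my own, not askell's):

```python
def build_system_prompt(task_context: str) -> str:
    """Assemble a system prompt applying the playbook: a respectful
    opening, positive framing, explicit permission to disagree, and
    a standing request for opinions, ahead of the actual task."""
    lines = [
        # 3. open with respect, no running complaints
        "You're doing good work on this project.",
        # 1. positive framing: say what to do, not what to avoid
        "Write in short, punchy sentences and take a clear position.",
        # 2. explicit permission to disagree
        "Push back if you see a better angle, and tell me if I'm "
        "asking for the wrong thing.",
        # 6. ask for opinions alongside execution
        "Where you see friction or something missing, say so.",
        task_context,
    ]
    return "\n".join(lines)

prompt = build_system_prompt("Task: edit the draft below for clarity.")
```

with the anthropic sdk, a string like this would typically be passed as the `system` parameter to `messages.create`, ahead of the user turn carrying the actual task.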

Will $TAO ever break ATH?
$TAO shipping speed has been insane despite the chaos.
- Subnet capacity doubling to 256 by year-end, more AI markets, more TAO demand
- Grayscale amended S-1 filed – $TAO ETF one step closer, institutional money circling
- First halving approaching – daily issuance to drop from 7,200 to 3,600 TAO
- Total $TAO staked to subnets surged 833,000% to $620M in 12 months
- Bittensor Commons Foundation launched – formalizing community-led governance

왕조MAX retweeted

Revenue Search 63: SN122 Bitrecs. Discovering asymmetric upside in AI start ups. x.com/i/broadcasts/1…

Don’t fucking fade
chang@chang_defi
Canada is set for a nasty face ripping multi decade bull rally

@banteg @trent_vanepps Agreed, i had some fun with the memes when miladies came out, owned one for a bit, sold cause I didn't vibe long term, very weird group

milady's core product is larp with the goal of growing the cult, it's entirely inward-facing. the entirety of the lore is self-referential and the gap between self-ascribed importance and actual influence is vast. the philosophy hasn't traveled any serious distance beyond the tiny ct bubble.

30 bittensor subnets registered on march 16 at $250k each. that's $7.5m in registration commitments in one day from validators who see subnet performance data before anyone else. registration cost moving to $500k would signal validators expect TAO in the $500-600 range. track this metric, not price
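the registration math above, as a quick sanity check (all figures are the tweet's claims, not verified on-chain):

```python
# Figures below come from the tweet, not from on-chain data.
subnets_registered = 30            # subnets registered on march 16
registration_cost_usd = 250_000    # claimed cost per registration
total_commitment = subnets_registered * registration_cost_usd
assert total_commitment == 7_500_000  # matches the $7.5m in the tweet
```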

@TheTNetHunter liked it but took the loss because of the anon team (also the dev runs the same pfp as a bunch of solana kol scammers i know), the way the dev has been communicating in disc, and the dodgy code that was released.
