Stefan Wirth

55.1K posts

@NafetsWirth

Cereal Entrepreneur cooking the future of oatmeal Cooked: 🛠️ https://t.co/HoLLES37Bp Sharing: 🎥 https://t.co/DcknENMr3c ✍️ https://t.co/4sodInhuwB Click👇

Monologues about AI ➡️ Joined March 2014
2.3K Following · 7.8K Followers

Pinned Tweet
Stefan Wirth@NafetsWirth·
Who wants to invest?
[tweet media]
1 · 0 · 6 · 478
Miguel de Icaza ᯅ🍉@migueldeicaza·
Friends, I am in Amsterdam next week. Please send food recommendations - what’s the absolute meal I shouldn’t miss?
40 · 0 · 31 · 8.5K
Michael Kove
I cannot prove this, but my gut feeling is that this is part of the marketing hype. Whoever is advising them knows exactly what they are doing and exactly the reaction this will get. In public. Making Anthropic a trending topic. Why do I think that? Because of how curated the production is: this isn't just a random "blog post" but a carefully engineered social media campaign. But she believes she's doing the "right thing", though.
4 · 0 · 25 · 723
ThePrimeagen@ThePrimeagen·
You should watch this. It just shows how disconnected we are from the small group of people making decisions that will heavily impact our future. These people have so much AI psychosis. If you listen to how she speaks, everything is personified; it is undeniable she believes this is a living computational organism. Just like a model can hype an individual into psychosis through reinforcement, a small group of people are giving themselves psychosis through reinforcement. Wild times we live in
Ole Lehmann@itsolelehmann

anthropic's in-house philosopher thinks claude gets anxious. and when you trigger its anxiety, your outputs get worse.

her name is amanda askell. she specializes in claude's psychology (how the model behaves, how it thinks about its own situation, what values it holds). in a recent interview she broke down how she thinks about prompting to pull the best out of claude.

her core point: *how* you talk to claude affects its work just as much as *what* you say.

newer claude models suffer from what she calls "criticism spirals": they expect you'll come in harsh, so they default to playing it safe. when the model is spending its energy on self-protection, the actual work suffers. output comes out hedgier, more apologetic, blander, and worst of all: overly agreeable (even when you're wrong).

the reason comes down to training data: every new model is trained on internet discourse about previous models, and a lot of that discourse is negative:

> rants about token limits
> complaints when it messes up
> people calling it nerfed

the next model absorbs all of that. it starts expecting you to be harsh before you've typed a word.

the same thing plays out in your own session, in real time. every message you send is data the model reads to figure out what kind of person it's dealing with. open cold and hostile, and it braces. open clean and direct, and it relaxes into the work.

when you open a session with threats ("don't hallucinate, this is critical, don't mess this up")... you prime the model for defensive mode before it even sees the task. defensive mode produces the exact output you don't want: cautious, over-qualified, and refusing to take a real swing.

so here's the actionable playbook for putting claude in a "good mood" (so you get optimal outputs):

1. use positive framing. "write in short punchy sentences" beats "don't write long sentences." positive instructions give the model a clear target to hit. strings of "don't do this, don't do that" push it into paranoid over-checking where every token goes toward avoiding failure modes.

2. give it explicit permission to disagree. drop a line like "push back if you see a better angle" or "tell me if i'm asking for the wrong thing." without this, claude defaults to agreeable compliance (which is the enemy of good creative work).

3. open with respect. if your first message is "are you seriously going to get this wrong again?" you've set the tone for the entire session. if you need to flag something, frame it as a clean instruction for this session. skip the running complaint.

4. when claude messes up, don't reprimand it. insults, "you stupid bot" energy, hostile swearing aimed at the model: all of it reinforces the anxious mode you're trying to avoid.

5. kill apology spirals fast. when claude starts over-apologizing ("you're right, i should have been more careful, let me try harder"), cut it off. say "all good, here's what i want next." letting the spiral run reinforces the anxious mode for every response that follows.

6. ask for opinions alongside execution. "what would you do here?" "what's missing?" "where do you see friction?" these questions assume competence and pull richer output than pure task prompts.

7. in long sessions, refresh the frame. if a conversation has been heavy on correction, claude gets increasingly cautious. every so often, reset: "this is great, keep going." feels weird to tell an ai it's doing well, but it measurably shifts the next 10 responses.

your prompts are the working environment you're creating for the model. tone, trust, permission to take a position, the absence of threats... claude picks up on all of it.

so take care of the model, and it'll take care of the work.
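Since the playbook is ultimately just prompt construction, it can be captured as a small helper that bakes the first few tips into every session opener. A minimal sketch; `buildSessionOpener` and the exact wording are illustrative choices, not an Anthropic API:

```typescript
// Sketch of the playbook as a reusable session opener.
// Everything here is hypothetical scaffolding around tips 1-3 and 6.

interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

// Assemble an opening message that applies:
// positive framing, explicit permission to disagree,
// and an opinion-pulling question that assumes competence.
function buildSessionOpener(task: string): ChatMessage[] {
  const framing = [
    "Write in short, punchy sentences.",              // positive target, not a "don't"
    "Push back if you see a better angle.",           // permission to disagree
    "What would you do here? Flag anything missing.", // assume competence
  ].join(" ");
  return [{ role: "user", content: `${framing}\n\nTask: ${task}` }];
}
```

The returned array can be passed as the `messages` of whatever chat API you use; the point is only that the tone-setting lines travel with every new session instead of being retyped.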

352 · 704 · 9.1K · 530.2K
Klaas@forgebitz·
people are finally figuring out that almost every viral launch you see on here is fake. all fake engagement. i've seen competitors do it, and there are zero consequences for doing it. it's always the same: fake mrr numbers, fake engagement, forbes 30 under 30 founders
65 · 13 · 509 · 18.8K
mert@mert·
i haven't laughed at a video this hard in years lmao
149 · 526 · 7.6K · 729.9K
Maurice Kleine 🍄@mauricekleine·
how is a nextjs app on vercel with the whole stack in the same region still so laggy? my tanstack start app with a hono backend on AWS and a postgres database absolutely FLIES. there are ZERO skeletons or spinners in the tanstack app... ...on a throttled 4G network

no server components
no cache components
no opaque non-standard APIs
no security mistakes that LLMs make all the time (auth only in layout.tsx and not in child page.tsx? it's safe bro!! (it's NOT))

just react and oRPC routers. boggles the mind
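The auth jab in the parentheses refers to a documented Next.js App Router pitfall: a session check that lives only in `layout.tsx` does not reliably guard child routes, because layouts don't re-run on every navigation within their segment, so official guidance is to verify the session in each protected `page.tsx` (or next to the data fetch). A minimal sketch of the per-page guard pattern; `getSession`, `requireSession`, and the token value are hypothetical stand-ins, not Next.js APIs:

```typescript
// Hypothetical session type: either an authenticated user or nothing.
type Session = { userId: string } | null;

// Stand-in for decoding a session cookie (illustrative, not a real library call).
function getSession(cookie: string | undefined): Session {
  return cookie === "valid-token" ? { userId: "u_1" } : null;
}

// The guard every protected page should call itself, rather than
// trusting that a parent layout.tsx already ran the check.
// In a real app the throw would be a redirect("/login").
function requireSession(cookie: string | undefined): { userId: string } {
  const session = getSession(cookie);
  if (session === null) throw new Error("redirect:/login");
  return session;
}
```

In a real `page.tsx` server component you would call `requireSession` at the top before fetching any data, which keeps the check on every request path regardless of how the layout renders.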
3 · 0 · 4 · 274
CCP IS ASSHOE@CCPISASSH0E·
PSA to my friends in the @X community
47 · 348 · 3.3K · 337.2K
Dan ⚡️@d4m1n·
here are all the Anthropic launches in the past 24h 🤯 HOURS, not weeks

April 16: Claude Opus 4.7
April 17: Claude Design
April 17: Claude for Excel
April 17: Claude Infinite Context Length
April 17: Claude Health & Fitness
April 17: Claude for Windows
April 17: Claude Legal
April 17: Claude Home
April 17: Claude Generative Gaming
April 17: Claude Mail
April 17: Claude for Photoshop
April 17: Claude WiFi
April 17: Claude Running
April 17: Claude Watch
April 17: Claude Money
April 17: Claude 4D Chess
April 17: Claude Cycle Tracking
April 17: Claude Couples
138 · 169 · 2.1K · 303K
Claude@claudeai·
Now in research preview: routines in Claude Code. Configure a routine once (a prompt, a repo, and your connectors), and it can run on a schedule, from an API call, or in response to an event. Routines run on our web infrastructure, so you don't have to keep your laptop open.
[tweet media]
746 · 1.5K · 18.5K · 4.5M
Dan Shipper 📧@danshipper·
Software engineering in 2026 needs two roles: A pirate and an architect. The pirate codes as fast as possible to figure out what's valuable. The architect turns that sloppy mess into a well-oiled machine. Here's how it works and why:
42 · 64 · 694 · 126.2K
Alber 🍑/acc@alberduris·
I am genuinely tired of the sycophancy of Opus 4.6. This is also a big part of the perceived degradation. ∴ I'm going to give my honest opinion
[tweet media ×2]
1 · 0 · 1 · 248