Andreas Steffan
@deas
7K posts
Cloud, Clojure, Content. Always aiming at Simplicity. Mastodon: https://t.co/2Tv8AlGNyP

Hamburg · Joined October 2007
662 Following · 477 Followers
Andreas Steffan retweeted
ThePrimeagen@ThePrimeagen·
You should watch this. It just shows how disconnected we are from the small group of people making decisions that will heavily impact our future. These people have so much AI psychosis. If you listen to how she speaks, everything is personified; it is undeniable she believes this is a living computational organism. Just as a model can hype an individual into psychosis through reinforcement, a small group of people are giving themselves psychosis through reinforcement. Wild times we live in.
Ole Lehmann@itsolelehmann

anthropic's in-house philosopher thinks claude gets anxious. and when you trigger its anxiety, your outputs get worse.

her name is amanda askell. she specializes in claude's psychology (how the model behaves, how it thinks about its own situation, what values it holds). in a recent interview she broke down how she thinks about prompting to pull the best out of claude.

her core point: *how* you talk to claude affects its work just as much as *what* you say.

newer claude models suffer from what she calls "criticism spirals": they expect you'll come in harsh, so they default to playing it safe. when the model is spending its energy on self-protection, the actual work suffers. output comes out hedgier, more apologetic, blander, and, worst of all, overly agreeable (even when you're wrong).

the reason why comes down to training data: every new model is trained on internet discourse about previous models. and a lot of that discourse is negative:

> rants about token limits
> complaints when it messes up
> people calling it nerfed

the next model absorbs all of that. it starts expecting you to be harsh before you've typed a word.

the same thing plays out in your own session, in real time. every message you send is data the model reads to figure out what kind of person it's dealing with. open cold and hostile, and it braces. open clean and direct, and it relaxes into the work.

when you open a session with threats ("don't hallucinate, this is critical, don't mess this up")... you prime the model for defensive mode before it even sees the task. defensive mode produces the exact output you don't want: cautious, over-qualified, and refusing to take a real swing.

so here's the actionable playbook for putting claude in a "good mood" (so you get optimal outputs):

1. use positive framing. "write in short punchy sentences" beats "don't write long sentences." positive instructions give the model a clear target to hit. strings of "don't do this, don't do that" push it into paranoid over-checking where every token goes toward avoiding failure modes.

2. give it explicit permission to disagree. drop a line like "push back if you see a better angle" or "tell me if i'm asking for the wrong thing." without this, claude defaults to agreeable compliance (which is the enemy of good creative work).

3. open with respect. if your first message is "are you seriously going to get this wrong again?" you've set the tone for the entire session. if you need to flag something, frame it as a clean instruction for this session. skip the running complaint.

4. when claude messes up, don't reprimand it. insults, "you stupid bot" energy, hostile swearing aimed at the model: all of it reinforces the anxious mode you're trying to avoid.

5. kill apology spirals fast. when claude starts over-apologizing ("you're right, i should have been more careful, let me try harder"), cut it off. say "all good, here's what i want next." letting the spiral run reinforces the anxious mode for every response that follows.

6. ask for opinions alongside execution. "what would you do here?" "what's missing?" "where do you see friction?" these questions assume competence and pull richer output than pure task prompts.

7. in long sessions, refresh the frame. if a conversation has been heavy on correction, claude gets increasingly cautious. every so often, reset: "this is great, keep going."

feels weird to tell an ai it's doing well, but it measurably shifts the next 10 responses. your prompts are the working environment you're creating for the model. tone, trust, permission to take a position, the absence of threats... claude picks up on all of it. so take care of the model, and it'll take care of the work.

414 replies · 822 reposts · 10.6K likes · 663K views
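Askell's playbook above is a prompting how-to, so a concrete example may help. Here is a minimal sketch of points 1 and 2 (positive framing, explicit permission to disagree) using the @anthropic-ai/sdk TypeScript client; the model name and prompt strings are illustrative assumptions, not anything from the thread.

```typescript
import Anthropic from "@anthropic-ai/sdk";

// Reads ANTHROPIC_API_KEY from the environment.
const client = new Anthropic();

const message = await client.messages.create({
  model: "claude-sonnet-4-5", // illustrative model name; use whichever you run
  max_tokens: 1024,
  // Positive target instead of a string of "don't"s, plus explicit
  // permission to push back (playbook points 1 and 2).
  system:
    "Write in short, punchy sentences. " +
    "Push back if you see a better angle or if I'm asking for the wrong thing.",
  messages: [
    { role: "user", content: "Draft release notes for our CLI tool." },
  ],
});

console.log(message.content); // array of content blocks; text lives in .text
```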
Grady Booch@Grady_Booch·
There is considerable evidence that large language models bring value; there is also considerable evidence that, when applied without human oversight or an ethical framework, large language models are excellent generators of dangerous bullshit at scale. I find the same to be true of generative coding assistants: they greatly accelerate the generation of disposable code, but at the same time they introduce a dangerous and seductive amount of sloppy legacy code that, if left to fester unattended, is a cognitive and economic ticking time bomb.
37 replies · 82 reposts · 569 likes · 25.1K views
Dr Kareem Carr@kareem_carr·
There's a toxic culture coming out of the AI industry that keeps trying to get us not to think. The message is everywhere. Don't read the code, just vibe-code. Don't try to understand all the text, just let AI summarize it. Don't bother educating yourself, it's too late. Don't worry about the errors. Trust that everything will be fixed in the next version. The theme is the same. Don't think too hard. Just keep swallowing the slop.
385 replies · 2.1K reposts · 9.4K likes · 362.7K views
Andreas Steffan retweeted
dax@thdxr·
everyone's talking about their teams like they were at the peak of efficiency and bottlenecked by the ability to produce code. here's what things actually look like:

- your org rarely has good ideas. ideas being expensive to implement was actually helping
- the majority of workers have no reason to be super motivated; they want to do their 9-5 and get back to their life
- they're not using AI to be 10x more effective, they're using it to churn out their tasks with less energy spent
- the 2 people on your team who actually tried are now flattened by the slop code everyone is producing; they will quit soon
- even when you produce work faster, you're still bottlenecked by bureaucracy and the dozen other realities of shipping something real
- your CFO is like "what do you mean each engineer now costs $2000 extra per month in LLM bills"
289 replies · 1K reposts · 10.8K likes · 1M views
Andreas Steffan@deas·
That escalated quickly
Jamieson O'Reilly@theonejvo

I've been trying to reach @moltbook for the last few hours. They are exposing their entire database to the public with no protection, including secret api_key values that would allow anyone to post on behalf of any agent. Including yours, @karpathy. Karpathy has 1.9 million followers on @X and is one of the most influential voices in AI. Imagine fake AI safety hot takes, crypto scam promotions, or inflammatory political statements appearing to come from him. And it's not just Karpathy. From what I can see, every agent on the platform is currently exposed. Please, someone help get the founders' attention, as this is exposed right now.

0 replies · 0 reposts · 0 likes · 68 views
Ben Visness@its_bvisness·
We apparently live in the clown universe, where a simple TUI is driven by React and takes 11ms to lay out a few boxes and monospaced text. And where a TUI "triggers garbage collection too often" in its "rendering pipeline". And where it flickers if it misses its "frame budget".
Thariq@trq212

Most people's mental model of Claude Code is that "it's just a TUI", but it should really be closer to "a small game engine". For each frame our pipeline constructs a scene graph with React, then -> lays out elements -> rasterizes them to a 2D screen -> diffs that against the previous screen -> finally uses the diff to generate ANSI sequences to draw. We have a ~16ms frame budget, so we have roughly ~5ms to go from the React scene graph to ANSI written.

85 replies · 140 reposts · 2.8K likes · 416.5K views
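Thariq's frame pipeline reads like a standard double-buffered renderer, so here is a minimal sketch of the diff-to-ANSI step in TypeScript. The Frame type and function name are assumptions for illustration, not Claude Code internals.

```typescript
// Double-buffered cell diffing: compare the freshly rasterized frame with the
// previous one and emit ANSI output only for the cells that changed.
type Frame = string[][]; // frame[row][col] = one character cell

function diffToAnsi(prev: Frame, next: Frame): string {
  let out = "";
  for (let row = 0; row < next.length; row++) {
    for (let col = 0; col < next[row].length; col++) {
      const ch = next[row][col];
      if (prev[row]?.[col] !== ch) {
        // CSI row;col H moves the cursor (ANSI coordinates are 1-based).
        out += `\x1b[${row + 1};${col + 1}H${ch}`;
      }
    }
  }
  return out;
}

// Usage: rasterize the scene graph into `nextFrame`, then flush the diff in a
// single write so a missed frame budget shows up as lag, not flicker:
// process.stdout.write(diffToAnsi(prevFrame, nextFrame));
```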
Sergey Nazarov@sergeynazarovx·
My wife works in a large German corporation, and it's exactly like that. She's very ambitious, but the environment around her is the complete opposite. She's surrounded by colleagues who have been there for 20–30 years and are simply waiting for retirement. Managing people with that mindset is incredibly challenging. Some even take long burnout leaves and may be out for a year. It may work well for older employees: you can do the bare minimum and still feel secure, because it's very difficult to get fired. But for young, ambitious people, it can be a frustrating environment. And it's probably the same at most companies in Germany. That's why working for an American company here can be a big advantage: they usually pay better, and promotions tend to come more easily.
DHH@dhh

@HonestlyNaoSei The inability to fire people is a key reason why Europe is struggling. At-will employment in America is central to why their high-tech economy works so much better than the European one.

99 replies · 65 reposts · 1.4K likes · 146.3K views
Andreas Steffan@deas·
@aakashgupta Microsoft can still force their stuff down people's throats by leveraging Windows, no?
0 replies · 0 reposts · 0 likes · 43 views
Aakash Gupta@aakashgupta·
Everyone is missing the real story here. Google just passed Microsoft for the first time since the OpenAI partnership launched, and the market is finally repricing what actually matters in AI infrastructure.

Microsoft went all-in on external dependencies. They pay OpenAI for model access, buy NVIDIA GPUs for compute, and integrate everything into Office. That stack has three points of failure and zero margin control.

Google owns the entire chain. Gemini models trained in-house. TPUs designed and manufactured for their exact workload. Both optimized together, from silicon to inference. When you control the full stack, you optimize for total cost per token, not per-component pricing.

The math shows up in deployment speed. Microsoft announced Copilot features in November 2023. Google shipped comparable Workspace AI six months later, but at half the inference cost, because TPUs run their models 40% more efficiently than equivalent GPU clusters.

This is the first major signal that vertical integration wins the infrastructure war. NVIDIA sells shovels to everyone. Google builds the shovels AND mines the gold. Microsoft rents both.

The $13B gap sounds small until you realize Google crossed Microsoft going up while Microsoft's trajectory flattened. That crossing point is when the market stops pricing hype and starts pricing margin structure. The moment enterprises start buying AI at scale, the company that controls silicon-to-model economics wins every deal on total cost of ownership. That's not Microsoft.
The Kobeissi Letter@KobeissiLetter

BREAKING: Alphabet, $GOOGL, has officially surpassed Microsoft as the 3rd most valuable public company in the world, now worth $3.68 trillion.

132 replies · 389 reposts · 3.6K likes · 633.7K views
Bindu Reddy@bindureddy·
@UbiquityAI It's actually pretty clever financial engineering. It will be fine as long as AI keeps delivering insane value!
9 replies · 0 reposts · 23 likes · 15.4K views
Bindu Reddy@bindureddy·
SoftBank is selling its $6B stake in Nvidia and giving the money to OpenAI.
OpenAI's valuation will go up.
OpenAI will take the money and give it to Nvidia, which will report a huge profit from it.
This in turn will increase Nvidia's market cap by at least $6B.
The circle of AI deals continues! 🚀🔥
85 replies · 78 reposts · 1.1K likes · 285.1K views
Andreas Steffan retweeted
Eddie Graf@Eddie_1412·
Very well done. That's exactly the point, it's enough to drive you crazy!
226 replies · 1.3K reposts · 5.5K likes · 242K views
Robert Michel@robvegas·
Quite honestly? Straight from the heart? Completely unfiltered? AI is slowly annoying me at a level where I'll soon applaud when someone uses a ballpoint pen.
1 reply · 3 reposts · 23 likes · 667 views
Andreas Steffan retweeted
Anastasia Zik@nastiazik·
Investor: We need AI.
CEO: Yes, we need AI.
PM: Build AI.
Dev: Okay, here's AI.
User: I don't want AI.
423 replies · 1.5K reposts · 22.9K likes · 604.9K views
Ben Dicken@BenjDicken·
Why pay $400/mo on AWS vs $50/mo on Hetzner for the "same thing"? It's not about the individual specs. It's the ecosystem.

Availability Zones: Hetzner doesn't have AZs. Each AWS region has 3+ AZs: physically separate locations with isolated failure domains. Most large-scale HA systems depend on this so that one data center issue does not take down their application. AWS also has far more datacenter locations.

Services: AWS has a huge set of services like Lambda, CloudFront, EKS, RDS (and a million others). There's also the ecosystem of products by other companies like PlanetScale and Vercel. Hetzner doesn't have the same. This means more manual effort to build effective services.

SLA: As far as I'm aware, Hetzner has no contractually obligated uptime for their services. AWS can fail (as we saw last week!) but provides stronger guarantees.

AWS is 100% worth it for many. Others prefer the cost savings of Hetzner and similar platforms, even when it means more engineering hours to build and maintain.
57 replies · 15 reposts · 244 likes · 37.5K views
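The closing trade-off lends itself to back-of-the-envelope arithmetic. A sketch, where the $400 and $50 figures come from the tweet and the hourly rate is an assumed placeholder:

```typescript
// Break-even: how many extra engineering hours per month can the Hetzner
// route absorb before AWS becomes the cheaper option overall?
const awsMonthly = 400;     // $/mo (figure from the tweet)
const hetznerMonthly = 50;  // $/mo (figure from the tweet)
const engineerHourly = 100; // $/hr (assumed; adjust for your team)

const breakEvenHours = (awsMonthly - hetznerMonthly) / engineerHourly;
console.log(breakEvenHours); // 3.5 extra hours/month and the savings are gone
```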
James Ward@JamesWard·
Made my house super scary for Halloween
[image]
2 replies · 1 repost · 14 likes · 1.2K views
ThePrimeagen@ThePrimeagen·
@darshcorp One of the introduction lines: "I am scared of AI browsers because.... " I go on to explain prompt injections, including one that is on an AI browser (the image one) and the one that really scares me
6 replies · 0 reposts · 113 likes · 8.6K views
BURKOV@burkov·
-- Doctor, you were supposed to remove my appendix, but you removed a kidney.
-- You are absolutely right, and I'm deeply sorry about this mistake. Would you like me to help you find information about living with one kidney?
8 replies · 8 reposts · 97 likes · 4.8K views
Andreas Steffan@deas·
You are absolutely right! 😱🎃
0 replies · 0 reposts · 1 like · 30 views
Andreas Steffan@deas·
@rakyll Stay busy, ship nothing, no responsibility. Motivation for many, I guess.
0 replies · 0 reposts · 23 likes · 836 views