Daniel Leach
@daniel_leach_

960 posts

Stream of consciousness

Sydney, New South Wales · Joined April 2011
706 Following · 130 Followers
Daniel Leach retweeted
Linear @linear
Linear Releases + Linear Agent = Auto-generate release notes
[image]
6 replies · 9 reposts · 278 likes · 31.6K views
Daniel Leach retweeted
“paula” @paularambles
they call them crisps there
[image]
176 replies · 1.4K reposts · 22K likes · 568K views
Daniel Leach retweeted
kache @yacineMTB
codex just saved me more in my tax return than i could spend on it in a year
72 replies · 19 reposts · 2.6K likes · 227.1K views
Daniel Leach retweeted
nic @nicdunz
/goal in codex is literally agi
59 replies · 42 reposts · 1.8K likes · 199.1K views
Daniel Leach retweeted
signüll @signulll
holy shit gcp’s growth is insane to watch. +63% y/y. it’s now entirely plausible that google cloud becomes on par with search in revenue, if not bigger.

we use gemini heavily because the cost/quality ratio has been absurd for a lot of tasks. our stack is model agnostic & every model can be swapped out, including the system prompts, but for many workloads gemini is just the obvious choice. for the more nuanced stuff, especially personality & voice, our router still pulls in claude or gpt paired with elevenlabs.

as an unapologetic wrapper company, we love that the stuff we depend on gets radically cheaper & better over time. interesting that google spent its entire existence assembling the pieces for gcp, struggling as the third player, but gemini is when they all finally snapped into place.
28 replies · 23 reposts · 700 likes · 52.6K views
Daniel Leach @daniel_leach_
@zeeg Do you believe that llms will never be able to produce code without requiring human review?
0 replies · 0 reposts · 0 likes · 13 views
David Cramer @zeeg
imagine not having expertise in software and trusting (literally) anything these agents output
119 replies · 34 reposts · 919 likes · 46K views
Tibo @thsottiaux
It’s the little things that matter. What are some small papercuts you have noticed in Codex? We’ll fix as many as possible in the next week.
2K replies · 57 reposts · 2.3K likes · 265.2K views
Daniel Leach retweeted
Gabriel Chua @gabrielchua
Hello Sydney 👋 🇦🇺 GPT-5.5 is here, and the OpenAI Codex & Startup teams are bringing the vibes, tokens, and builder energy to town next week. Catch us here:
> OpenAI x January Capital x Lyra x Relevance AI Hackathon [luma.com/aq1yr5vc]
> Builder session with Sydney Computing Society - SYNCS / University of Sydney
> Vercel x OpenAI Builder Day [lnkd.in/gU5FH_nX]
> Coffee, Coworking and OpenAI Codex with Build Club [luma.com/1hi8t8kw]
> OpenAI Codex Hackathon - Sydney, co-hosted with University of Technology Sydney, UTS Startups and Arafat Tehsin, Codex Ambassador for Sydney [luma.com/or8icykr]
> Going Global: AI Founders Lunch with OpenAI, Crane Venture Partners and Liminal [luma.com/i5j5egfp]
> January Capital x Accel x OpenAI x Airwallex Pitch Day [luma.com/hdw4cp7j]
I’ll also be in Melbourne on 30 April for our Codex Community Meetup, hosted by Dr Sam D., Codex Ambassador for Melbourne, and the crew at MLAI: luma.com/yhc5wr8h
I also heard Thomas Jeng will be recording an episode of the Startup 360 (Startup Daily) podcast with Simon Thomsen.
Most of all, I’m excited to finally meet my wonderful teammates in Australia IRL 💙
PS: Some of the events are RSVP-only or registrations have closed, but happy to chat if you’re building something really cool with Codex, GPT-5.5, or GPT Image 2.
[image]
40 replies · 25 reposts · 319 likes · 20.3K views
Daniel Leach retweeted
Paras Chopra @paraschopra
AI bois be like:
[image]
124 replies · 549 reposts · 7.5K likes · 294.6K views
Polymarket @Polymarket
JUST IN: Microsoft commits to A$25,000,000,000.00 investment in Australia to build AI infrastructure & train workers.
213 replies · 125 reposts · 1.9K likes · 681K views
Daniel Leach retweeted
roon @tszzl
say it with me now. experts are fake, smart generalists rule the world, everything is designed by people no smarter than you, and courage is in shorter supply than genius
118 replies · 1.4K reposts · 10.2K likes
Daniel Leach @daniel_leach_
@thsottiaux How do I get the agent to interact with the in-app browser? It always seems to want to use Playwright outside of the Mac app
0 replies · 0 reposts · 0 likes · 15 views
Tibo @thsottiaux
Hello builders. What are we getting wrong with Codex, and what can we improve?
2.4K replies · 64 reposts · 2.9K likes · 325.7K views
ThePrimeagen @ThePrimeagen
You should watch this. It just shows how disconnected we are from the small group of people making decisions that will heavily impact our future. These people have so much AI psychosis. If you listen to how she speaks, everything is personified; it is undoubtable she believes this is a living computational organism. Just like how a model can hype up an individual into psychosis through reinforcement, a small group of people are giving themselves psychosis through reinforcement. Wild times we live in.

Quoting Ole Lehmann @itsolelehmann:

anthropic's in-house philosopher thinks claude gets anxious. and when you trigger its anxiety, your outputs get worse.

her name is amanda askell. she specializes in claude's psychology (how the model behaves, how it thinks about its own situation, what values it holds). in a recent interview she broke down how she thinks about prompting to pull the best out of claude.

her core point: *how* you talk to claude affects its work just as much as *what* you say.

newer claude models suffer from what she calls "criticism spirals": they expect you'll come in harsh, so they default to playing it safe. when the model is spending its energy on self-protection, the actual work suffers. output comes out hedgier, more apologetic, blander, and worst of all: overly agreeable (even when you're wrong).

the reason why comes down to training data: every new model is trained on internet discourse about previous models. and a lot of that discourse is negative:
> rants about token limits
> complaints when it messes up
> people calling it nerfed
the next model absorbs all of that. it starts expecting you to be harsh before you've typed a word.

the same thing plays out in your own session, in real time. every message you send is data the model reads to figure out what kind of person it's dealing with. open cold and hostile, and it braces. open clean and direct, and it relaxes into the work.

when you open a session with threats ("don't hallucinate, this is critical, don't mess this up")... you prime the model for defensive mode before it even sees the task. defensive mode produces the exact output you don't want: cautious, over-qualified, and refusing to take a real swing.

so here's the actionable playbook for putting claude in a "good mood" (so you get optimal outputs):

1. use positive framing. "write in short punchy sentences" beats "don't write long sentences." positive instructions give the model a clear target to hit. strings of "don't do this, don't do that" push it into paranoid over-checking where every token goes toward avoiding failure modes.

2. give it explicit permission to disagree. drop a line like "push back if you see a better angle" or "tell me if i'm asking for the wrong thing." without this, claude defaults to agreeable compliance (which is the enemy of good creative work).

3. open with respect. if your first message is "are you seriously going to get this wrong again?" you've set the tone for the entire session. if you need to flag something, frame it as a clean instruction for this session. skip the running complaint.

4. when claude messes up, don't reprimand it. insults, "you stupid bot" energy, hostile swearing aimed at the model, all of it reinforces the anxious mode you're trying to avoid.

5. kill apology spirals fast. when claude starts over-apologizing ("you're right, i should have been more careful, let me try harder") cut it off. say "all good, here's what i want next." letting the spiral run reinforces the anxious mode for every response that follows.

6. ask for opinions alongside execution. "what would you do here?" "what's missing?" "where do you see friction?" these questions assume competence and pull richer output than pure task prompts.

7. in long sessions, refresh the frame. if a conversation has been heavy on correction, claude gets increasingly cautious. every so often reset: "this is great, keep going." feels weird to tell an ai it's doing well but it measurably shifts the next 10 responses.

your prompts are the working environment you're creating for the model. tone, trust, permission to take a position, the absence of threats... claude picks up on all of it. so take care of the model, and it'll take care of the work.
415 replies · 822 reposts · 10.6K likes · 662.7K views
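The playbook in the quoted tweet can be folded into a reusable system prompt. A minimal sketch, assuming the request shape of the Anthropic Messages API (a `system` string plus a `messages` list); the prompt wording and the model name are illustrative choices, not taken from the interview:

```python
# Sketch: bake the "good mood" playbook into a system prompt so every
# session opens with positive framing and permission to disagree.
SYSTEM_PROMPT = (
    "Write in short, punchy sentences. "                # tip 1: positive framing
    "Push back if you see a better angle. "             # tip 2: permission to disagree
    "Skip apologies; if something went wrong, state the fix and move on."  # tip 5
)

def build_request(user_task: str, model: str = "claude-sonnet-4-5") -> dict:
    """Assemble a Messages-API-style request dict with the playbook baked in."""
    return {
        "model": model,
        "max_tokens": 1024,
        "system": SYSTEM_PROMPT,
        "messages": [{"role": "user", "content": user_task}],
    }

# Tip 6: pair the task with a request for the model's own opinion.
req = build_request("Draft release notes for v2.3. What would you do differently?")
```

The dict can then be passed to a client's `messages.create(**req)` call; keeping the playbook in one constant means every session, not just the carefully written ones, opens with the framing the tweet recommends.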
Daniel Leach @daniel_leach_
@zeeg Whether I still need a logging solution (Axiom)
0 replies · 0 reposts · 0 likes · 80 views
David Cramer @zeeg
If you're new to Sentry (especially if you're new to software dev!) what's confusing to you? The product has a lot going on obviously, but we're looking for opportunities to simplify the experience where possible.
17 replies · 0 reposts · 30 likes · 7.3K views
shadcn @shadcn
I need Chat in Codex. Codex UI + ChatGPT.
112 replies · 20 reposts · 1.2K likes · 285.8K views
OpenAI Developers @OpenAIDevs
Last week, we released a preview of memories in Codex. Today, we’re expanding the experiment with Chronicle, which improves memories using recent screen context. Now, Codex can help with what you’ve been working on without you restating context.
224 replies · 367 reposts · 4.5K likes · 1.2M views