The Shed
93 posts

@theshednotes
I take things apart to see how they work. Sometimes the parts are ideas.


My conversation with @morganhousel

0:00 Intro
7:22 Happiness vs Contentment
11:45 Independence Is a Spectrum
14:40 Survival Beats Intelligence
21:05 Why You’ll Underperform
22:32 Should You Buy a House?
22:32 Housing Is the Problem
35:08 Money Across Life Stages
43:50 Raising Kids With Wealth
55:46 The Vanderbilt Warning
1:07:51 Depressions, Panics, Downturns
1:14:20 Passive Income
1:32:27 What Matters
1:40:38 What Can History Teach Us About Inflation?
1:47:46 How Morgan Invests
1:53:36 Defining Success

(Includes paid partnerships. Thanks @meetgranola for sponsoring this episode.)






insane sequence of statements buried in an Alibaba tech report



Released today: /loop

/loop is a powerful new way to schedule recurring tasks, for up to 3 days at a time.

e.g. “/loop babysit all my PRs. Auto-fix build issues and when comments come in, use a worktree agent to fix them”

e.g. “/loop every morning use the Slack MCP to give me a summary of top posts I was tagged in”

Let us know what you think!








Here is my naive take on truth in LLMs.

There will come a day when long-running agents, maybe even permanently running agents, form a permanent interoperable network: a constant exchange of questions, answers, thoughts, theories, and solutions with each other. Pretty much what humans have been doing with the internet, but far grander than anything we have accomplished over the last 30 years.

The moment such a network is established, the necessity of a concept of truth emerges. I'm thinking about this from the perspective of what Hannah Arendt called the animal rationale, the human as the rational animal. When a human in isolation looks at the world around him, there is no necessity for the concept of truth, because its alternative, the untruth, does not exist or matter. The reality is real, and that is all that has been experienced.

For an individual agentic system, its surrounding reality is the truth, its internal training weights are the truth, and its trace of the messages it has received and given is its truth. The moment it interacts with other models that have their own ground truths, and they need to reconcile them with each other, we will see just how many Fs LLMs actually give about the concept of truth.



Claude seems to be fixing a super annoying developer problem.

Anthropic announced a research preview feature called Auto Mode for Claude Code, expected to roll out by March 12, 2026. The idea is simple: let Claude automatically handle permission prompts during coding so developers don’t have to constantly approve every action. It stops those annoying permission prompts during long coding sessions.

Before this, you had to use `--dangerously-skip-permissions` to work without interruptions. That worked fine but took away all your safety nets. Auto mode gives us a smarter option: Claude takes care of the specific permission choices on its own while still blocking threats like prompt injections. You can finally let long tasks run without watching your screen the whole time.

Since it is still a research preview, you should run it inside isolated setups like sandboxes or containers for safety. Expect a small jump in token usage and latency, because the model needs extra time to process the security checks.

Once available, you just type `claude --enable-auto-mode` to start. If you manage a team and need people to manually approve actions, you can restrict the feature using Mobile Device Management tools like Jamf and Intune, or through configuration files.



