

AI Survivalist
@SurviveWithAI
Adapt. Survive. Thrive. In the age of AI.


The Trump administration is discussing the creation of an AI working group that could establish a government review process for new AI models before public release, following growing cybersecurity concerns around increasingly capable systems like Anthropic's Mythos. White House officials briefed executives from Anthropic, Google, and OpenAI on the plans last week, though the proposals remain in early stages and no executive order has been confirmed. Via NYT


If you don't understand this, you will not understand why LLM-based agents are failing irreparably at general-purpose problem solving. To be useful, an agent (agents were, by the way, the topic of my PhD 20 years ago) must be rational. Being rational means always preferring the action that yields the maximal expected utility for its master/user.

Let's say an agent has two actions it can execute in an environment: a_1 and a_2. If the agent can predict that a_1 gives its user an expected utility of 10, and a_2 gives an expected utility of -100, then a rational agent must choose a_1, even if a_2 seems like the better option when explained in words. The numbers 10 and -100 are obtained by summing, over all possible outcomes of each action, the product of each outcome's utility and its likelihood.

Now here is the problem with LLM-based agents. The LLM is not optimizing expected utility in the environment. It is optimizing the next token, conditioned on a prompt, a context window, and a training distribution full of examples of what helpful answers are supposed to look like. Those are not the same objective.

So when we wrap an LLM in a loop and call it an "agent," we have not created a rational decision-maker. We have created a text generator that can imitate the surface form of deliberation. It may say things like "I should compare the expected outcomes," "The best action is probably a_1," or "I will now execute the optimal plan." But the internal mechanism is not selecting actions by maximizing the user's expected utility. It is generating a continuation that is statistically appropriate given the prompt and prior context.

This distinction matters enormously. For narrow tasks, the imitation can be good enough. If the environment is constrained, the actions are simple, and the success criteria are close to patterns seen in training, the system can appear agentic. But for general-purpose problem solving, the gap becomes fatal.

A rational agent needs stable preferences, calibrated beliefs, causal models of the world, the ability to evaluate consequences, and the discipline to choose the action with maximal expected utility even when that action is boring, non-linguistic, or unlike the examples in its training data. An LLM-based agent has none of that by default. It has fluency. It has pattern completion. It has a remarkable ability to compress and recombine human text. But fluency is not rationality, and a plausible plan is not an expected-utility calculation.

This is why these systems so often fail in strange, brittle, and irreparable ways when given open-ended responsibility. They are not failing because the prompts are insufficiently clever. They are failing because we are asking a simulator of rational agency to be a rational agent.



THE JOB MARKET IS ABOUT TO GET WEIRD. And most people are not prepared for what is coming.

Companies in 2026 are not looking for data scientists. They are not looking for ML engineers. They are not looking for people who can build models from scratch.

THEY ARE LOOKING FOR AI NERDS.

The person who walks into a meeting, sees a 4-hour manual process, and kills it in 10 minutes with Claude Code and LLMs. The person who refuses to do anything manually twice. The person who looks at every repetitive task and asks one question: why is a human still doing this?

That mindset is worth more right now than a machine learning PhD. More than five years of Python experience. More than any certification from any university.

THE NEW VALUABLE SKILL IS NOT TECHNICAL. It is a refusal to accept inefficiency.

The people who develop that refusal this year will be completely unemployable in the old way and completely irreplaceable in the new one. Which side of that line are you on?


I hate to say it but OpenAI is so back. ChatGPT Pro is good.


Customize your Codex pet with /hatch


The President signed an Executive Order to strengthen efficiency and accountability in federal contracting. At NASA, every dollar matters as we return Americans to the Moon, and we won’t tolerate inefficient use of taxpayer resources, waste, fraud, or abuse that stands in the way of the mission.
