Fat Tony ☯️ @chainlank · 84.3K posts
As usual I will give my X earnings ($311.86) to someone who likes this post! Winner chosen at random on Sunday. For extra fun, if the winner follows @joinnoblemobile I will DOUBLE it, and if you are a Noble subscriber I will give you FIVE times the amount! Good luck! 😀🎉


The "rescue" of the F-15 pilot has been confirmed as 100% fake. It never happened. Why did the Pentagon lie to the American people?



“Fourth, carry out a final barrage of leadership strikes, eliminating the Iranian officials who had been spared for the purpose of negotiations. Iran’s leaders must be made to understand that their lives literally depend on reaching a negotiated settlement to Trump’s liking. If they refuse to do so, they will be killed.”
washingtonpost.com/opinions/2026/…

Indians rooting for the Islamabad talks to fail because they hate Pakistan so much should remember the fuel crisis in their country as a consequence of the war. Don’t be so cruel that you’re suicidal.

Claude Code is not AGI, but it is the single biggest advance in AI since the LLM. But here’s the thing: Claude Code is NOT a pure LLM. And it’s not pure deep learning. Not even close. And that changes everything.

The source code leak proves it. Tucked away at its center is a 3,167-line kernel called print.ts. print.ts is a pattern matcher. And pattern matching is supposed to be the *strength* of LLMs. But Anthropic figured out that if you really need to get your patterns right, you can’t trust a pure LLM. They are too probabilistic. And too erratic.

Instead, the way Anthropic built that kernel is straight out of classical symbolic AI. For example, it is in large part a big IF-THEN conditional, with 486 branch points and 12 levels of nesting, all inside a deterministic, symbolic loop that the real godfathers of AI, people like John McCarthy and Marvin Minsky and Herb Simon, would have instantly recognized.*

Putting it differently: when push came to shove, Anthropic went exactly where I long said the field needed to go (and where @geoffreyhinton said we didn’t need to go): to neurosymbolic AI. That’s right, the biggest advance since the LLM was neurosymbolic. AlphaFold, AlphaEvolve, AlphaProof, and AlphaGeometry are all neurosymbolic, too; so is Code Interpreter; when you are calling code, you are asking symbolic AI to do an important part of the work.

Claude Code isn’t better because of scaling. It’s better because Anthropic accepted the importance of using classical AI techniques alongside neural networks, precisely the marriage I have long advocated. It’s *massive* vindication for me (go see my 2019 debate with Bengio for context, or my 2001 book, The Algebraic Mind), but it still ain’t perfect, or even close.
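The shape described above can be sketched, very loosely, in a few lines. This is a hypothetical illustration only, not anything from the actual print.ts: every identifier here (neuralDraft, routeRequest, Draft) is invented. It just shows the neurosymbolic pattern the post is pointing at: a deterministic IF-THEN dispatcher that gates and routes the output of a probabilistic model.

```typescript
// Hypothetical sketch of a neurosymbolic control loop. All names invented;
// not Anthropic's code. The "neural" side is a keyword-triggered stub.

type Draft = { kind: "code" | "prose" | "shell"; text: string };

// Stand-in for the neural component: in a real system this would be an LLM
// call returning a proposed action. Here it's a toy keyword lookup.
function neuralDraft(prompt: string): Draft {
  if (prompt.includes("delete")) return { kind: "shell", text: "rm -rf ./build" };
  if (prompt.includes("list")) return { kind: "shell", text: "ls -la" };
  return { kind: "prose", text: "Here is an answer." };
}

// Symbolic component: a deterministic IF-THEN conditional that decides what
// to do with the model's draft. The model proposes; the rules dispose.
function routeRequest(prompt: string): string {
  const draft = neuralDraft(prompt);
  if (draft.kind === "shell") {
    // Hard-coded safety branch: destructive commands never pass through,
    // no matter how confident the model was.
    if (/\brm\b|\bdd\b/.test(draft.text)) {
      return "REFUSED";
    }
    return `exec: ${draft.text}`;
  } else if (draft.kind === "code") {
    return `review: ${draft.text}`;
  }
  return draft.text; // prose falls through unchanged
}
```

The point of the sketch: the branch that refuses `rm` is guaranteed to fire whenever the regex matches, which no amount of sampling temperature can change. That determinism is what a pure LLM cannot offer.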
What we really need to do to get trustworthy AI, rather than the current unpredictable “jagged” mess, is to go in the knowledge-, reasoning-, and world-model-driven direction I laid out in 2020 in an article called The Next Decade in AI, in which neurosymbolic AI is just the *starting point* in a longer journey.* Read that article if you want to know what else we need to do next. The first part has already come to pass. In time, the other three will, too.

Meanwhile, the implications for the allocation of capital are pretty massive: smartly adding in bits of symbolic AI can do a lot more than scaling alone, and as even Anthropic has now discovered (though they won’t say so), scaling is no longer the essence of innovation. The paradigm has changed.

*Claude Code is plainly neurosymbolic, but the code part is a mess; as Ernie Davis and I argued in Rebooting AI in 2019, we also need major advances in software engineering. But that’s a story for another day.










