Sergio Martínez
74K posts

Sergio Martínez
@SuperSerch
Java developer with a twist in security. DevOps & OpenStack enthusiast. OWASP member. Opinions my own.
47.743017, -86.934128 · Joined May 2009
551 Following · 1.2K Followers
Sergio Martínez reposted

Claude Code is not AGI, but it is the single biggest advance in AI since the LLM.
But the thing is, Claude Code is NOT a pure LLM. And it’s not pure deep learning. Not even close.
And that changes everything.
The source code leak proves it. Tucked away at its center is a 3,167-line kernel called print.ts.
print.ts is a pattern matcher. And pattern matching is supposed to be the *strength* of LLMs.
But Anthropic figured out that if you really need to get your patterns right, you can’t trust a pure LLM. They are too probabilistic. And too erratic.
Instead, the way Anthropic built that kernel is straight out of classical symbolic AI. For example, it is in large part a big IF-THEN conditional, with 486 branch points and 12 levels of nesting — all inside a deterministic, symbolic loop that the real godfathers of AI, people like John McCarthy and Marvin Minsky and Herb Simon, would have instantly recognized.*
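The shape described above, a deterministic symbolic outer layer dispatching on probabilistic model output, can be sketched in a few lines. Everything below is illustrative: none of the names, types, or rules come from the actual Claude Code source.

```typescript
// Hypothetical sketch of a neurosymbolic control layer: an explicit,
// rule-based dispatcher decides what to do with each model output.
// All identifiers here are invented for illustration.

type ModelOutput = { kind: "tool_call" | "text" | "error"; payload: string };

// The "neural" side, stubbed as a pure function so the sketch is runnable.
function callModel(prompt: string): ModelOutput {
  if (prompt.includes("run:")) return { kind: "tool_call", payload: prompt.slice(4) };
  if (prompt === "") return { kind: "error", payload: "empty prompt" };
  return { kind: "text", payload: `echo: ${prompt}` };
}

// The "symbolic" side: a deterministic IF-THEN dispatch. A real kernel
// would have hundreds of such branches; three are enough to show the idea.
function dispatch(prompt: string): string {
  const out = callModel(prompt);
  if (out.kind === "tool_call") {
    return `executing tool: ${out.payload}`;   // symbolic rule 1
  } else if (out.kind === "error") {
    return `recovering from: ${out.payload}`;  // symbolic rule 2
  } else {
    return out.payload;                        // symbolic rule 3
  }
}
```

The point of the sketch is the division of labor: the model may be probabilistic, but what happens to its output is decided by deterministic rules.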
To put it differently: Anthropic, when push came to shove, went exactly where I long said the field needed to go (and where @geoffreyhinton said we didn’t need to go): to Neurosymbolic AI.
That’s right, the biggest advance since the LLM was neurosymbolic. AlphaFold, AlphaEvolve, AlphaProof, and AlphaGeometry are all neurosymbolic, too; so is Code Interpreter; when you are calling code, you are asking symbolic AI to do an important part of the work.
Claude Code isn’t better because of scaling.
It’s better because Anthropic accepted the importance of using classical AI techniques alongside neural networks — precisely the marriage I have long advocated.
It’s *massive* vindication for me (go see my 2019 debate with Bengio for context, or my 2001 book, The Algebraic Mind), but it still ain’t perfect, or even close.
What we really need to do to get trustworthy AI, rather than the current unpredictable “jagged” mess, is to go in the knowledge-, reasoning-, and world-model-driven direction I laid out in 2020, in an article called The Next Decade in AI, in which neurosymbolic AI is just the *starting point* in a longer journey.*
Read that article if you want to know what else we need to do next.
The first part has already come to pass. In time, the other three will, too.
Meanwhile, the implications for the allocation of capital are pretty massive: smartly adding in bits of symbolic AI can do a lot more than scaling alone, and even Anthropic has now discovered (though they won’t say so) that scaling is no longer the essence of innovation.
The paradigm has changed.
—
*Claude Code is plainly neurosymbolic but the code part is a mess; as Ernie Davis and I argued in Rebooting AI in 2019, we also need major advances in software engineering. But that’s a story for another day.

NEW POST
Modern hardware is fast, but software often fails to leverage it. Caer Sanders guides his work with mechanical sympathy. He distills this into principles: predictable memory access, awareness of cache lines, single-writer, natural batching
martinfowler.com/articles/mecha…
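Two of the listed principles, single-writer and natural batching, can be seen in miniature in code. A hypothetical sketch (the class and method names are mine, not from the linked article):

```typescript
// Illustrative sketch of two mechanical-sympathy principles:
// single-writer (only one code path mutates the buffer, so the write
// path needs no locks or compare-and-swap loops) and natural batching
// (the consumer drains whatever has accumulated, amortizing per-item
// handoff cost). Names are invented for this example.

class SingleWriterQueue<T> {
  private buffer: T[] = [];

  // Single writer: push is the only mutation point on the producer side.
  push(item: T): void {
    this.buffer.push(item);
  }

  // Natural batching: take everything accumulated in one call, instead
  // of paying the handoff cost once per item.
  drainBatch(): T[] {
    const batch = this.buffer;
    this.buffer = [];
    return batch;
  }
}
```

In a real system the buffer would be a pre-allocated ring sized with cache lines in mind; a JS array only illustrates the access pattern, not the memory layout.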
Sergio Martínez reposted

🎙️🔴 Now 𝐥𝐢𝐯𝐞: the press conference from the Bernabéu with Vincent Kompany.
👉 youtube.com/live/NeZfrOS0K…

Sergio Martínez reposted

🚨 Holy shit… Deloitte billed $1.6 million for a healthcare report filled with AI-hallucinated citations.
This is the second time in two months they’ve been caught.
First an Australian government agency. Now a Canadian province’s Department of Health.
And their response? They “stand by the conclusions.”
Let me translate that for you: “The AI made up the sources, but trust us, the advice is still good.”
That’s a $1.6 million report. For a healthcare system. With fake citations that nobody at Deloitte bothered to verify before submitting.
Not an intern’s draft.
The final deliverable.
The Australian incident was supposed to be a wake-up call.
Deloitte even partially refunded that government for the errors.
You’d think after publicly embarrassing themselves once, someone would have implemented a basic fact-checking step before hitting send on the next million-dollar engagement.
They didn’t.
And here’s what makes this story bigger than Deloitte.
Every major consulting firm is racing to integrate AI into their workflows. McKinsey, BCG, Bain, Accenture.
They’re all doing it. Because AI lets them produce reports faster with fewer junior analysts, which means higher margins on the same $500/hour billing rates.
But the entire consulting business model is built on one thing: trust. You’re paying for credibility.
You’re paying so that when you hand the report to your board or your minister, nobody questions the sources. The moment that trust breaks, the math changes completely.
Why pay $1.6 million for AI-generated analysis with fake citations when you could run the same prompts yourself for $20/month and at least know to check the sources?
That’s the real disruption nobody’s talking about. AI isn’t going to replace consulting firms by being smarter than them.
It’s going to replace them by revealing that a huge percentage of consulting work was always just expensive research and formatting.
And now the clients have access to the same tools.
Deloitte’s problem isn’t that they used AI. It’s that they used AI the way most people use AI: paste in a request, take the output at face value, ship it.
No verification layer.
No human review of citations.
No system.
The firms that survive this era won’t be the ones who use AI the fastest. They’ll be the ones who build actual verification systems around AI output. The ones who treat AI as a first draft, not a final product.
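The “verification layer” and “human review of citations” called for above can start as a deterministic gate: nothing ships while any citation fails to resolve. A minimal sketch, with the resolver stubbed out as an injected function — a real one would query a DOI resolver, a library catalog, or the cited URL itself:

```typescript
// Hypothetical citation-verification pass over an AI-drafted report.
// Every citation must resolve against a trusted source before the draft
// can ship. The resolver is injected so the check stays testable.

type Citation = { id: string; title: string };

function verifyCitations(
  citations: Citation[],
  resolve: (c: Citation) => boolean,
): { verified: Citation[]; fabricated: Citation[] } {
  const verified: Citation[] = [];
  const fabricated: Citation[] = [];
  for (const c of citations) {
    (resolve(c) ? verified : fabricated).push(c);
  }
  return { verified, fabricated };
}

// Gate: refuse to ship a deliverable with any unresolved citation.
function canShip(
  report: { citations: Citation[] },
  resolve: (c: Citation) => boolean,
): boolean {
  return verifyCitations(report.citations, resolve).fabricated.length === 0;
}
```

The gate is intentionally dumb: it cannot judge whether the analysis is right, only whether the sources exist — which is exactly the check the reports in question failed.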
$1.6 million. Fake citations. Twice in two months. And they stand by the conclusions.
The consulting industry’s biggest threat isn’t AI.
It’s clients realizing they don’t need to pay someone else to hallucinate.

Sergio Martínez reposted

We are living in the era where you open Twitter and an astronaut tweets a photo from space while on his way to the Moon.
What a moment to be alive ✨
Reid Wiseman@astro_reid
There are no words.
Sergio Martínez reposted

Or better, use your work computer just for work!
SwiftOnSecurity@SwiftOnSecurity
Or recommendations
Sergio Martínez reposted

Germany now mandates that all men ages 17–45 who want to leave the country for longer than 3 months obtain a permit.
"Drastic change to conscription: Men who want to leave Germany for longer periods will need approval"
"All men over 17 and under 45 years old who want to leave Germany for longer than three months must obtain a permit from the Bundeswehr (German Armed Forces). It doesn't matter whether someone has planned a semester abroad, wants to take up a job abroad, or is planning a backpacking trip around the world: above all, there is a mandatory visit to the Bundeswehr's career center."
Sascha 🍉|🇻🇦✝️🌹🕊️@Pasolinis_Asche
"Freiheit" ["freedom"]
Sergio Martínez reposted

Hahaha, Outlook inbox not working on Earth OR in space.
Marcus House@MarcusHouse
Yes... In case anyone was wondering, Microsoft still sucks in space.
Sergio Martínez reposted

They always have “other data”; they never take responsibility.
And even so, plenty of people keep applauding and defending them 🤦🏻♀️
Latinus@latinus_us
Gobierno de Sheinbaum arremete contra comité de la ONU: llama "tendencioso" al informe que pide investigar las desapariciones. #Latinus #InformaciónParaTi latinus.us/mexico/2026/4/…