Andrés Meza-Escallón
728 posts

Andrés Meza-Escallón
@SoftwareShaper
Systems Engineer/CS, Master in Communication, 30+ years leading web software development, machine learning/AI to drive innovation. SQL,Python,Pandas,LangChain.
Jamundí, Colombia · Joined January 2011
68 Following · 94 Followers

Andrés Meza-Escallón retweeted

If this Karpathy interview doesn't pop the AI bubble,
nothing will.
10 brutal quotes:
1. LLMs don’t work yet
They don’t have enough intelligence, they’re not multimodal enough, they can’t use computers, and they don’t remember what you tell them.
They’re cognitively lacking. It’ll take about a decade to work through all of that.
2. When you boot them up, they always start from zero
They have no distillation phase, no process like sleep where what happened gets analyzed and written back into the weights.
3. What’s stored in their weights is only a hazy recollection of the internet
It's just a compressed blur of 15 trillion tokens squeezed into a few billion parameters. Their context window is just short-term working memory.
4. They’re good at imitation, terrible at going off the data manifold
Too much memory, not enough reasoning.
We need to strip away the memorized knowledge and keep the cognitive core: the algorithms, the magic of intelligence, problem-solving, strategy.
5. We’ve probably recreated a cortical tissue, pattern-learning and general, but we’re still missing the rest of the brain
No hippocampus for memory.
No amygdala for instincts.
No emotions or motivations.
6. They memorize perfectly but generalize poorly
If you give them random numbers, they can recite them back. No human can do that.
That’s the problem: humans forget just enough to be forced to find patterns.
7. They stumble on anything truly new: code that's never been written before, ideas that have no template
They’re still autocomplete engines with perfect recall and no understanding. Until we find that cognitive core, intelligence stripped of memory but full of reasoning, they’ll stay brilliant mimics, not minds.

The tactical layer of development is becoming commoditized.
But the strategic layer — that’s still very much a human game.
— Craig Adam
#ai #SoftwareDevelopment


Assumptions that aren’t based
on well-established facts
are the bane of all projects.
—Andrew Hunt & David Thomas,
The Pragmatic Programmer
#SoftwareDevelopment #SoftwareEngineering

Andrés Meza-Escallón retweeted

You sometimes need to think much harder.
Now there is human judgment, evidence, and AI thinking.
AI is a very good complementary tool that helps us with some of the deep thinking.
— Itamar Gilad
#SoftwareDevelopment #ProductDesign #UIUX



What’s a task that’s too big?
Any task that requires “fortune telling”
— Andrew Hunt & David Thomas
The Pragmatic #Programmer
#SoftwareDevelopment #SoftwareEngineering


#Agile planning depended, to a significant degree,
on decomposing work into small enough pieces
that we could complete our features
within a single sprint, or iteration.
— Dave Farley
#SoftwareEngineering #SoftwareDevelopment


A thing is well designed if it adapts to the people who use it.
— Andrew Hunt & David Thomas,
The Pragmatic Programmer
#SoftwareDevelopment #SoftwareEngineering


Instead of wasting effort
designing for an uncertain future,
you can always fall back on
designing your code to be replaceable.
— Andrew Hunt & David Thomas,
The Pragmatic #Programmer
#SoftwareDevelopment #SoftwareEngineering

Andrés Meza-Escallón retweeted

Stop the nonsense.
Don't build an AI Agent unless there's no other alternative.
For most applications, you'll be better off:
1. Starting with a simple rule-based system
2. Upgrading to static models when complexity increases
3. Implementing simple, well-prompted LLM calls
4. Building an LLM-based workflow with predefined code paths
Then, and only then, AI agents might be a good alternative.
Every single option on the above list is easier to implement and more maintainable than an agent.
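A minimal sketch of the first steps of that ladder, assuming a hypothetical support-routing task: a rule-based router with predefined code paths handles the common cases deterministically, and a single LLM call (stubbed here as `call_llm`, a stand-in for any real client) is reached only when the rules fail to match.

```python
# Sketch of the escalation ladder above: rules first, LLM only as fallback.
# All names here (RULES, call_llm, handle) are illustrative, not a real API.

def call_llm(text: str) -> str:
    # Placeholder for option 3: a single, well-prompted LLM call.
    # A real implementation would invoke an actual model client here.
    return f"llm-fallback:{text}"

# Option 1: a simple rule-based system with predefined code paths.
RULES = {
    "refund": "route:billing",
    "password": "route:auth",
    "cancel": "route:retention",
}

def handle(text: str) -> str:
    lowered = text.lower()
    for keyword, route in RULES.items():
        if keyword in lowered:
            return route          # deterministic, cheap, easy to test
    return call_llm(text)         # escalate only when no rule matches

print(handle("I forgot my password"))   # route:auth
print(handle("Tell me a joke"))         # llm-fallback:Tell me a joke
```

Each rung stays independently testable: the rule paths are plain assertions, and the LLM boundary is a single function you can swap or mock, which is exactly why this is more maintainable than an agent loop.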