Andrés Meza-Escallón

69.3K posts

@ApoloDuvalis

Software engineer. Master's degree in Communication. English/Spanish. Science, AI, programming, renewable energy, science fiction, and space.

Cali, Colombia · Joined April 2008
402 Following · 1.4K Followers
Pinned Tweet
Andrés Meza-Escallón@ApoloDuvalis·
Is #IA free? Will it take all our jobs? Is it just a passing fad? In this video we debunk 6 common myths about Artificial Intelligence that you have surely heard. youtube.com/watch?v=ZA_vqz…
[YouTube video]
Spanish · 0 replies · 0 reposts · 0 likes · 95 views
Andrés Meza-Escallón retweeted
Aakash Gupta@aakashgupta·
Your brain doesn't form the thought until you write it down. Nature Reviews Bioengineering published the case for that claim last summer in an editorial titled "Writing is thinking."

The cited evidence is a 2024 EEG study at the Norwegian University of Science and Technology. 36 students alternated between handwriting and typing the same words. 256-channel sensor array. Cursive on a touchscreen versus keys on a keyboard. Same words both ways.

Handwriting produced widespread connectivity across parietal and central brain regions. Typing didn't. The theta and alpha frequency bands the literature ties to memory formation and encoding lit up almost exclusively when the hand was forming the letters. The motor act was producing the cognition.

What the editorial extends from that finding is the more uncomfortable claim. Writing a scientific article is the mechanism by which a researcher discovers what their main message actually is. The act of constructing sentences forces the chaotic, non-linear way the mind wanders into a structured, intentional narrative. You sort years of research into a story, and in the sorting, you find out what you believe.

Then the line: If writing is thinking, are we not then reading the thoughts of the LLM rather than those of the researchers behind the paper?

Nature endorses LLMs for grammar, search, brainstorming, breaking through writer's block. Where the line gets drawn is outsourcing the whole writing process. Because the writing process is the thinking process.

Even editing the LLM's draft is harder than writing one from scratch. To restructure someone else's reasoning you have to reconstruct it first, which means doing the cognitive work anyway, with worse leverage and more friction. The time savings on the keyboard turn out to be cognitive savings on the part of the brain you wanted to use.

Your first draft was the thinking.
Aakash Gupta tweet media
English · 63 replies · 591 reposts · 2.5K likes · 240.9K views
Andrés Meza-Escallón retweeted
Grady Booch@Grady_Booch·
Having been part of the industry for 50 years, I can confidently report that none of this is true. Sure, writing code has a non-zero cost; this is true of any artifact. But you know what costs even more, Jonathan? Writing bad code; writing unnecessary code; writing more code than you really need simply because you think you might need it someday or you are too lazy or sloppy to clean up after yourself. Anything that costs nothing is often worth nothing as well, and results in significant unintended consequences.
Jonathan Ross@JonathanRoss321

For 50 years, software engineering ran on code rationing. Writing code was expensive, so we rationed it carefully through roadmaps, RFCs, prioritization meetings, and scope reviews.

This created a role: the No Engineer. No, that won't scale. No, we don't have bandwidth. No, that's out of scope. No, we need a design doc first.

The No Engineer was valuable for 50 years. Every "no" saved real money. Their judgment was the rationing system.

LLMs will be the end of code rationing. Code is cheap now. And while the No Engineer is explaining why something can't be done, the Yes Engineer has already shipped three versions of it.

If you're a Yes Engineer, the next decade is yours.

English · 87 replies · 240 reposts · 2.3K likes · 131K views
Andrés Meza-Escallón retweeted
Aakash Gupta@aakashgupta·
This is an absolute masterclass from MIT on how to speak
English · 92 replies · 3.2K reposts · 16.2K likes · 2.9M views
Andrés Meza-Escallón retweeted
Yanco@the_yanco·
@AISafetyMemes this meme happening in real time:
Yanco tweet media
English · 0 replies · 1 repost · 4 likes · 76 views
Andrés Meza-Escallón retweeted
AI Notkilleveryoneism Memes ⏸️
@sama Anthropic wasn't being dishonest; the ad is clearly a joke. And it wasn't long ago you were saying you weren't gonna run ads at all, so people naturally wonder where this dark path with fucked up incentives ends up x.com/tomwarren/stat…
Tom Warren@tomwarren

Anthropic just took a big swipe at OpenAI's decision to put ads in ChatGPT. Anthropic is airing ads mocking ChatGPT ads during the Super Bowl, and they're hilarious 😅 Anthropic is also committing to no ads in Claude theverge.com/ai-artificial-…

English · 8 replies · 12 reposts · 602 likes · 40.2K views
Andrés Meza-Escallón retweeted
Enséñame de Ciencia@EnsedeCiencia·
A metaphor for dehumanization and the senselessness of the times we live in…
Spanish · 10 replies · 500 reposts · 1.8K likes · 113K views
Andrés Meza-Escallón retweeted
Andrés@aramh4ck·
Career path to become an AI Engineer
Andrés tweet media
Portuguese · 0 replies · 1 repost · 2 likes · 386 views
Andrés Meza-Escallón retweeted
SrGrafo@SrGrafo2·
Once I saw a guy write it wrong and fix it without restarting from the beginning. It was surreal.
SrGrafo tweet media
English · 5 replies · 26 reposts · 654 likes · 7.5K views
Andrés Meza-Escallón retweeted
Santiago@svpino·
I don't believe you can vibe-code complex applications. Put another way, your ability to vibe-code software is inversely proportional to its complexity.

Vibe-coding is like going into the "Minotaur Labyrinth" without Theseus' thread: you'll invariably get lost. More accurately, you'll be stuck in an endless loop, going in circles with the model:

• Things that used to work stop working
• Your code starts bloating
• Solutions become over-engineered

And all of this will happen while working with popular tech, where models have plenty of training. Try to step outside the beaten path (use anything a bit more obscure), and models won't help you at all.

To get out of this labyrinth, you need Theseus's thread. And here, the thread is "knowledge." If you know how to steer the model, you can get it unstuck. If you know how to fix a bug, you can move forward. If you know how to break down a problem and architect the solution, you'll go much farther.

But that's not vibe-coding anymore. That's called Software Engineering.
English · 318 replies · 94 reposts · 1K likes · 77.1K views
Andrés Meza-Escallón retweeted
Andrés Meza-Escallón@SoftwareShaper·
"If it hurts, do it more often" — Martin Fowler
Andrés Meza-Escallón tweet media
English · 0 replies · 1 repost · 0 likes · 38 views
Andrés Meza-Escallón retweeted
Andrés Meza-Escallón@SoftwareShaper·
These gaps cannot be eliminated; they can only be shrunk by failing faster and getting feedback more frequently.
Andrés Meza-Escallón tweet media
English · 0 replies · 1 repost · 0 likes · 40 views
Andrés Meza-Escallón retweeted
Aakash Gupta@aakashgupta·
LeCun has been saying “scaling is dead, LLMs won’t reach AGI” for three years straight. The response: “There he goes again.” Meta reportedly started viewing him as “the odd man out.” WSJ runs headlines about the internal rift. Twitter dunks on him weekly. He leaves to start a World Model startup and people shrug.

Meanwhile, Ilya drops a Dwarkesh interview TODAY saying the exact same thing: “2020-2025 was the age of scaling. That era is ending. We’re back to the age of research.” The response: 4,000 quote tweets calling him a prophet. “Ilya sees what others can’t.” “The oracle has spoken.” SSI raises $3 billion on vibes and a promise to do undefined research.

Same take. Same timeline. Same “LLMs can’t generalize” argument. One gets the attractive guy treatment. One gets HR called. The meme is literally playing out in real time.

LeCun’s been running the “scaling is saturating” bit since CES. Ilya says it and it’s treated like tablets coming down from the mountain. LeCun’s been saying it for years = background noise. Ilya says it once = paradigm shift.

AI discourse has never been about the argument. It’s about who makes it.
wyqtor@wyqtor

@jm_alexia

English · 40 replies · 27 reposts · 327 likes · 47K views