diegodl @diegodl
3.1K posts
Data, music, and whatever else. Product Director @TheBridge_Tech
Joined January 2008
937 Following · 281 Followers
diegodl retweeted
Santiago @svpino
I don’t believe people who say they are running “12 parallel coding agents”. Either they are lying for clicks, or I’m a complete retard who can barely keep up with a single Claude instance.
diegodl retweeted
CG @cgtwts
> be Yann LeCun
> spend years building JEPA at Meta
> company focuses on LLaMA instead
> his idea stays complicated and unused
> robotics plans get dropped
> decides to leave and start AMI Labs
> builds a much simpler version from scratch
> trains it on normal hardware in just a few hours
> removes all the complicated tricks and keeps it simple
Results:
- uses 200x less data than similar systems
- makes decisions 50x faster
- runs on a single GPU instead of massive clusters
- simple to train
- understands movement, objects, and space
- can tell when something is physically impossible
- learns how the real world works without being explicitly taught
Aakash Gupta @aakashgupta

Earlier this year Yann LeCun left Meta because Mark Zuckerberg wouldn't bet the company on JEPA. Last week his group dropped the first JEPA that actually trains end-to-end from raw pixels. 15 million parameters. Single GPU. A few hours. The timing is not a coincidence.

For four years Meta has been the house that JEPA built. LeCun published the original paper from FAIR in 2022. I-JEPA and V-JEPA came out of his lab. The architecture was supposed to be the escape hatch from LLMs, the path to robots that actually learn physics instead of hallucinating about it. Every version shipped fragile. Stop-gradients. Exponential moving averages. Frozen pretrained encoders. Six or seven loss terms that had to be hand-tuned or the model collapsed into garbage representations.

Meta kept funding LLMs. Llama shipped. Llama scaled. Llama got beat by Qwen and DeepSeek. Zuck spent $14 billion to buy ScaleAI and install Alexandr Wang. The FAIR robotics group was dissolved. LeCun's research kept winning papers and losing the product roadmap. He left, started AMI Labs, and said publicly that LLMs were a dead end.

Now the paper. LeWorldModel. One regularizer replaces the entire pile of heuristics. Project the latent embeddings onto random directions, run a normality test, penalize deviation from Gaussian. The model cannot collapse because collapsed embeddings fail the test by construction. Hyperparameter search went from O(n^6) to O(log n). Six tunable knobs became one.

The downstream numbers are what should scare the robotics capex class. 200 times fewer tokens per observation than DINO-WM. Planning time drops from 47 seconds to 0.98 seconds per cycle. 48x faster at matching or beating foundation-model performance on Push-T and 3D cube control. The latent space probes cleanly for agent position, block velocity, end-effector pose. It correctly flags physically impossible events as surprising. It learned physics without being told physics existed.

Figure AI is valued at $39 billion. Tesla Optimus is mass-producing. World Labs raised $230 million to sell generative world models. Everyone in humanoid robotics is burning capital on foundation-model pipelines that plan in 47 seconds per cycle. LeCun's group just showed you can do it with 15 million parameters on a single GPU in a few hours.

This is the Xerox PARC pattern running again. Meta had the next architecture. Meta had the scientist. Meta dissolved the robotics team, passed on the productization, and watched the exit. Three months later the lab that was supposed to be Meta's publishes the result that resets the robotics cost structure. The paper is worth more than Alexandr Wang.
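The anti-collapse regularizer the thread describes (project embeddings onto random directions, penalize deviation from a Gaussian) can be sketched roughly as follows. This is an illustrative reconstruction from the thread's one-sentence description, not the paper's actual loss: the function name, the use of simple moment matching instead of a proper normality test, and all hyperparameters are my assumptions.

```python
import numpy as np

def gaussianity_penalty(z, num_dirs=64, seed=0):
    """Illustrative anti-collapse penalty (assumed form, not the paper's loss).

    Projects a batch of embeddings z with shape (N, D) onto random unit
    directions and penalizes the first two moments of each 1-D projection
    for deviating from a standard Gaussian (mean 0, variance 1).
    """
    rng = np.random.default_rng(seed)
    n, d = z.shape
    dirs = rng.standard_normal((d, num_dirs))
    dirs /= np.linalg.norm(dirs, axis=0, keepdims=True)  # unit-norm directions
    proj = z @ dirs                                      # (N, num_dirs) 1-D projections
    mean_pen = np.mean(proj.mean(axis=0) ** 2)           # projected mean should be 0
    var_pen = np.mean((proj.var(axis=0) - 1.0) ** 2)     # projected variance should be 1
    return mean_pen + var_pen

# Collapsed embeddings (every point identical) have zero variance along
# every direction, so the variance term alone contributes ~1 per direction:
# they fail the test by construction, as the thread puts it, while a batch
# of roughly standard-Gaussian embeddings scores near zero.
```

In a training loop this scalar would simply be added to the prediction loss; the point of the construction is that no stop-gradients, moving averages, or extra loss terms are needed to prevent collapse.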

diegodl retweeted
Jesús López @_JesusLopez_
A stroke of genius on British TV, about how the English take their holidays in Spain
diegodl @diegodl
@fenris1234 @lagamez Thanks for bringing this crap onto my timeline. Sometimes we should stop to think whether, by calling something out, we spread it further
diegodl retweeted
JM Martín-Olalla @MartinOlalla_JM
I've spent 12 years explaining the usefulness of the annoying seasonal clock change, first at @politikon_es, later in newspapers and scientific journals. This week I spoke to @eldebate_com and gave the same explanation once again. Here is the result: eldebate.com/ciencia/202603…
diegodl retweeted
Jerónimo López @jerolba
Even if they had only hidden 1,000€, the mere fact of tampering with the transparency portal should be a scandal that would force their entire government to resign. Here nothing happens, and that is why they will keep doing it, and it will only get worse.
Arnau Borràs @arnaubor

1) A private citizen creates subvencions.cat
2) It is discovered that Catalonia spends 5,000 million a year on subsidies (!)
3) The Generalitat disables access to the transparency portal for 28 hours
4) When access is restored, it turns out that 1 million records have been deleted, making it harder for citizens to audit where public money goes and to whom it is handed out
5) One example: Open Arms has received some 2.5 million euros from Catalan public institutions in recent years. Only about 550,000 appear on the transparency portal.
It is a disgrace and they must give explanations. And they shouldn't take us for fools by talking about a "technical error".

diegodl @diegodl
Gemini's Deep Research has gotten much worse. It was probably one of the most underrated tools for what it offered. It looks like they've started playing with inference effort, and it shows
diegodl retweeted
Adam Grant @AdamMGrant
The books you love are a window into your personality.
• Mystery & self-improvement attract conscientious people
• Sci-fi, psychology, philosophy draw open-minded people
• Memoir & horror appeal to neurotic people
Reading doesn't just shape our views. It reveals what we're like.
[images attached]
diegodl retweeted
Xavier Mitjana @xavier_mitjana
A year and a half ago. I remember being asked in an interview: how long do you think it will take AI to change how we live? And I answered 10 to 15 years. The reaction was mild astonishment, because coming from me they surely expected a different answer. But the reason for my answer was not the technology (I find AI marvelous). My conclusion was based on what this chart from an Anthropic publication shows. Optimistic forecasts are always made based on what AI can potentially do (blue area), not on how it is actually used (red area). That adoption process is far slower and more gradual. The bottleneck has been us for a long time now.
[chart attached]
diegodl retweeted
César Coca @cesarcocag
There are some very, very sectarian people who say it is impossible for good right-wing (or left-wing) people to exist. For all of them, this magnificent column by Manuel Vicent in El País.
[image attached]
diegodl retweeted
Paul Graham @paulg
Prediction: In the AI age, taste will become even more important. When anyone can make anything, the big differentiator is what you choose to make. paulgraham.com/taste.html
diegodl retweeted
Bindu Reddy @bindureddy
Cheap models are extremely under-rated! These models will automate routine tasks and will play a key role in achieving
Best cheap / small models in the world today:
- Gemini Flash 3 - excellent price for performance
- Kimi K2.5 - very good on benchmarks
- Haiku 4.5 - faster and better than Flash
- GPT 5 nano - insanely fast, great for a classifier
- Qwen family - great for fine-tuning
diegodl @diegodl
Karpathy writes and we little hobbits read attentively
Andrej Karpathy @karpathy

A few random notes from Claude coding quite a bit the last few weeks.

Coding workflow. Given the latest lift in LLM coding capability, like many others I rapidly went from about 80% manual+autocomplete coding and 20% agents in November to 80% agent coding and 20% edits+touchups in December. I.e. I really am mostly programming in English now, a bit sheepishly telling the LLM what code to write... in words. It hurts the ego a bit, but the power to operate over software in large "code actions" is just too net useful, especially once you adapt to it, configure it, learn to use it, and wrap your head around what it can and cannot do. This is easily the biggest change to my basic coding workflow in ~2 decades of programming, and it happened over the course of a few weeks. I'd expect something similar to be happening for well into double-digit percent of engineers out there, while awareness of it in the general population feels well into low single-digit percent.

IDEs/agent swarms/fallibility. Both the "no need for IDE anymore" hype and the "agent swarm" hype are imo too much for right now. The models definitely still make mistakes, and if you have any code you actually care about I would watch them like a hawk, in a nice large IDE on the side. The mistakes have changed a lot - they are not simple syntax errors anymore, they are subtle conceptual errors that a slightly sloppy, hasty junior dev might make. The most common category is that the models make wrong assumptions on your behalf and just run along with them without checking. They also don't manage their confusion, they don't seek clarifications, they don't surface inconsistencies, they don't present tradeoffs, they don't push back when they should, and they are still a little too sycophantic. Things get better in plan mode, but there is some need for a lightweight inline plan mode. They also really like to overcomplicate code and APIs, they bloat abstractions, they don't clean up dead code after themselves, etc. They will implement an inefficient, bloated, brittle construction over 1000 lines of code, and it's up to you to be like "umm couldn't you just do this instead?", and they will be like "of course!" and immediately cut it down to 100 lines. They still sometimes change/remove comments and code they don't like or don't sufficiently understand as side effects, even if it is orthogonal to the task at hand. All of this happens despite a few simple attempts to fix it via instructions in CLAUDE.md. Despite all these issues, it is still a huge net improvement, and it's very difficult to imagine going back to manual coding. TLDR: everyone has their developing flow; my current one is a small few CC sessions on the left in ghostty windows/tabs and an IDE on the right for viewing the code + manual edits.

Tenacity. It's so interesting to watch an agent relentlessly work at something. They never get tired, they never get demoralized, they just keep going and trying things where a person would have given up long ago to fight another day. It's a "feel the AGI" moment to watch one struggle with something for a long time just to come out victorious 30 minutes later. You realize that stamina is a core bottleneck to work, and that with LLMs in hand it has been dramatically increased.

Speedups. It's not clear how to measure the "speedup" of LLM assistance. Certainly I feel net way faster at what I was going to do, but the main effect is that I do a lot more than I was going to do, because 1) I can code up all kinds of things that just wouldn't have been worth coding before, and 2) I can approach code that I couldn't work on before because of a knowledge/skill issue. So certainly it's a speedup, but it's possibly a lot more an expansion.

Leverage. LLMs are exceptionally good at looping until they meet specific goals, and this is where most of the "feel the AGI" magic is to be found. Don't tell it what to do; give it success criteria and watch it go. Get it to write tests first and then pass them. Put it in the loop with a browser MCP. Write the naive algorithm that is very likely correct first, then ask it to optimize it while preserving correctness. Change your approach from imperative to declarative to get the agents looping longer and gain leverage.

Fun. I didn't anticipate that with agents programming feels *more* fun, because a lot of the fill-in-the-blanks drudgery is removed and what remains is the creative part. I also feel less blocked/stuck (which is not fun), and I experience a lot more courage because there's almost always a way to work hand in hand with it to make some positive progress. I have seen the opposite sentiment from other people too; LLM coding will split up engineers based on those who primarily liked coding and those who primarily liked building.

Atrophy. I've already noticed that my ability to write code manually is slowly starting to atrophy. Generation (writing code) and discrimination (reading code) are different capabilities in the brain. Largely due to all the little, mostly syntactic details involved in programming, you can review code just fine even if you struggle to write it.

Slopacolypse. I am bracing for 2026 as the year of the slopacolypse across all of github, substack, arxiv, X/instagram, and generally all digital media. We're also going to see a lot more AI hype productivity theater (is that even possible?), alongside actual, real improvements.

Questions. A few of the questions on my mind:
- What happens to the "10X engineer" - the ratio of productivity between the mean and the max engineer? It's quite possible that this grows *a lot*.
- Armed with LLMs, do generalists increasingly outperform specialists? LLMs are a lot better at fill-in-the-blanks (the micro) than grand strategy (the macro).
- What does LLM coding feel like in the future? Is it like playing StarCraft? Playing Factorio? Playing music?
- How much of society is bottlenecked by digital knowledge work?

TLDR: Where does this leave us? LLM agent capabilities (Claude & Codex especially) have crossed some kind of threshold of coherence around December 2025 and caused a phase shift in software engineering and closely related fields. The intelligence part suddenly feels quite a bit ahead of all the rest of it - integrations (tools, knowledge), the necessity for new organizational workflows, processes, diffusion more generally. 2026 is going to be a high-energy year as the industry metabolizes the new capability.
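Karpathy's "write the naive algorithm that is very likely correct first, then ask it to optimize it while preserving correctness" advice amounts to using the naive version as an oracle in a randomized check. A minimal sketch of that pattern, where the concrete problem (maximum subarray sum) and both function names are my own illustration, not anything from the thread:

```python
import random

def max_subarray_naive(xs):
    """Obviously-correct O(n^2) reference: best sum over all non-empty slices."""
    return max(sum(xs[i:j])
               for i in range(len(xs))
               for j in range(i + 1, len(xs) + 1))

def max_subarray_fast(xs):
    """Optimized O(n) version (Kadane's algorithm) an agent might produce."""
    best = cur = xs[0]
    for x in xs[1:]:
        cur = max(x, cur + x)      # extend the running slice or restart it
        best = max(best, cur)
    return best

# The naive version acts as the success criterion: the optimized version
# must agree with it on randomized inputs before it is accepted.
rng = random.Random(0)
for _ in range(200):
    xs = [rng.randint(-10, 10) for _ in range(rng.randint(1, 30))]
    assert max_subarray_fast(xs) == max_subarray_naive(xs)
```

Handing an agent the naive reference plus this property check is a declarative success criterion it can loop against, rather than step-by-step instructions.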

diegodl @diegodl
The "I'll tell you things, but not the whole story" standard at @La_SER is pitiful. First the story about the train driver, now this. We deserve to see the most-listened-to radio station in the country, of which I am a listener, raise its standards, especially since on top of it all we'll get to hear them lecturing us about misinformation. Very sad
Cadena SER @La_SER

🔴 Renfe informed Emergency Services of the existence of injuries on the Alvia train at 20:00 on the day of the Adamuz accident. La SER exclusively publishes the audio of the conversation between the Renfe employee and the Emergency Coordination Centre cadenaser.com/nacional/2026/…

diegodl retweeted
Luis Garicano 🇪🇺🇺🇦
You are the government of Spain and you have 1,371 million to spend on transport. Your rail and road networks are falling to pieces. What do you decide:
- Invest the 1,371 m. in infrastructure maintenance.
- Buy votes by subsidizing a 60-euro transport pass.
[images attached]