
Fernando Conde
@fcondeg
Techie, engineer, ex-Game Producer, father of three amazing kids. Pathologically curious, grumpy, hyperactive, and a devil's advocate. Proudly @Sngular.

NEW: 🇩🇪🇨🇳 German Chancellor Merz says Germans need to work more in order to match China: “We are simply no longer productive enough. Each individual may say, ‘I already do quite a lot.’ And that may be true. But when you return from China, ladies and gentlemen, you see things more clearly. With work-life balance and a four-day week, long-term prosperity in our country cannot be maintained. We will simply have to do a bit more.”

Nate the Hate believes that if the Final Fantasy IX Remake was in development, it's probably "on ice" now, and he's not sure whether Square will come back to it. mynintendonews.com/2025/10/18/nat…

This MIT paper just broke my brain. Everyone keeps saying LLMs can't do real logical reasoning. Turns out we've just been teaching them wrong this whole time.

These researchers built something called PDDL-INSTRUCT that actually teaches models to think through planning problems step by step. Not just pattern matching: actual logical reasoning.

Here's how it works. Phase 1: show the model correct and incorrect plans with explanations. Basic stuff. Phase 2 is where it gets interesting: they make the model generate explicit reasoning for every single action, then use an external verifier to check whether each step is logically sound.

The numbers are wild. Llama-3-8B jumped from 28% to 94% accuracy on planning benchmarks. That's not an incremental improvement, that's a completely different capability emerging.

What's smart is that they don't trust the model to check its own work. They use VAL, a formal plan validator, to validate every logical step. When the model screws up, it gets specific feedback about exactly what went wrong.

The two-stage training is clever. The first stage focuses purely on better reasoning chains; the second optimizes for actually solving the problem. This prevents the model from just gaming the metrics.

One finding caught my attention: detailed feedback destroys binary feedback. Just telling a model "wrong" vs. explaining exactly which preconditions failed makes a huge difference, and the gap is especially big on complex problems.

This isn't trying to replace symbolic planners. It's teaching neural networks to reason like symbolic planners while keeping external verification. That's actually sustainable.

The implications go way beyond planning. Any multi-step reasoning task could benefit from this approach. We might finally be seeing how to teach LLMs structured thinking instead of just sophisticated autocomplete. Makes me wonder what other "impossible" capabilities are just sitting there waiting for the right training approach.
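To make that verify-then-feedback loop concrete, here's a minimal sketch of the idea. To be clear, this is my own toy illustration, not the paper's code: the `Action` dataclass, the `verify_step`/`verify_plan` helpers, and the Blocksworld-style facts are all assumptions I made up for the example. In the actual pipeline, VAL does this checking against real PDDL domains.

```python
# Toy sketch of the verified reasoning loop described above.
# PDDL-INSTRUCT uses VAL (the standard PDDL plan validator) as the
# external checker; this hand-rolled verifier is just a stand-in.

from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset  # facts that must hold before the action
    add_effects: frozenset    # facts the action makes true
    del_effects: frozenset    # facts the action makes false

def verify_step(state: set, action: Action):
    """Check one step of the model's reasoning chain.

    Returns (ok, feedback). The feedback names the exact facts that
    failed: the detailed signal the paper found beats a bare "wrong".
    """
    missing = action.preconditions - state
    if missing:
        return False, (f"Cannot apply {action.name}: "
                       f"preconditions not satisfied: {sorted(missing)}")
    return True, f"{action.name} OK"

def apply(state: set, action: Action) -> set:
    """Advance the world state by one action's effects."""
    return (state - action.del_effects) | action.add_effects

def verify_plan(initial_state: set, goal: set, plan: list):
    """Walk the whole plan, returning per-step feedback on failure.

    In training, this feedback is fed back to the model so it learns
    *why* a step was invalid, not just that it was.
    """
    state = set(initial_state)
    for step in plan:
        ok, feedback = verify_step(state, step)
        if not ok:
            return False, feedback
        state = apply(state, step)
    unmet = goal - state
    if unmet:
        return False, f"Plan ends without achieving goals: {sorted(unmet)}"
    return True, "Plan valid"

# Tiny Blocksworld-flavoured example.
pickup_a = Action(
    name="pickup(A)",
    preconditions=frozenset({"clear(A)", "ontable(A)", "handempty"}),
    add_effects=frozenset({"holding(A)"}),
    del_effects=frozenset({"clear(A)", "ontable(A)", "handempty"}),
)
stack_a_b = Action(
    name="stack(A,B)",
    preconditions=frozenset({"holding(A)", "clear(B)"}),
    add_effects=frozenset({"on(A,B)", "clear(A)", "handempty"}),
    del_effects=frozenset({"holding(A)", "clear(B)"}),
)

init = {"clear(A)", "clear(B)", "ontable(A)", "ontable(B)", "handempty"}
goal = {"on(A,B)"}

print(verify_plan(init, goal, [pickup_a, stack_a_b]))  # (True, 'Plan valid')
print(verify_plan(init, goal, [stack_a_b]))            # detailed failure
```

The point of the toy: on the bad plan, `verify_plan` doesn't just say "wrong", it names the exact missing precondition (`holding(A)`). That's the detailed-feedback signal the paper credits for most of the gain over binary feedback.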