
whoops








To all of you who doubted the HOI4brain thesis: look at this LARP






President Donald Trump said Wednesday he won't use force to acquire Greenland. It is the first time Trump has ruled out using force, having previously been vague about how far he is willing to go in his push. to.pbs.org/4pVNTTD


weirdly enough I think all of these are references to the video game Helldivers 2…




“China has to give us magnets. If they don't, we charge a 200% tariff or something. Nobody needed magnets until they convinced everybody 20 years ago let’s all do magnets." - Trump ?????



The Beijing–Shanghai high-speed railway superimposed on the US





Suppose that you need to write a 1000-LOC function. You ask Grok3, and it fails. In that situation, most devs would fall back to coding manually. I argue that, instead, they should keep giving the AI info until it succeeds. "Why? Won't it be faster to just type the 1000 lines?" In some cases, perhaps. But you still want the AI to do it, because AIs are *way* less sloppy than humans. They don't flip arguments or make silly syntax errors. They fail for different (and even complementary) reasons than humans do: AIs fail because of bad reasoning. So, if you just take charge of the reasoning, you can get the best of both worlds: your code will be in the perfect "AI-generated" shape, and it will have the correct logic, fed by you. This works very well for any reasoning model that exposes its thinking traces, because you can see clearly *why* the AI failed. "Oh, Grok3 spent half of the reasoning trying to figure out where ids are stored. Well, I can just add that to the prompt, and try again." Writing lines manually in 2025 is a harmful fallback practice.
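The loop being argued for here — re-prompt with one more clarification per failure instead of dropping to manual coding — could be sketched roughly like this. Everything below is hypothetical: `ask_model` is a stub standing in for whatever LLM API you actually call, and `passes_tests` stands in for your real test suite; neither is a real library function.

```python
def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call. This stub only
    # "succeeds" once the prompt says where ids are stored,
    # mimicking the failure mode described in the post.
    if "ids are stored" in prompt:
        return "def lookup(id): return ID_TABLE[id]"
    return ""  # model produced nothing usable


def passes_tests(code: str) -> bool:
    # Stand-in for running your actual test suite on the output.
    return code.strip() != ""


def generate_with_context(task: str, clarifications: list[str]) -> str:
    """Keep re-prompting, appending one clarification per failure,
    instead of falling back to writing the code by hand."""
    prompt = task
    for hint in [""] + clarifications:
        prompt = f"{prompt}\n{hint}".strip()
        code = ask_model(prompt)
        if passes_tests(code):
            return code
    raise RuntimeError("out of clarifications; add more context")


code = generate_with_context(
    "Write a lookup function.",
    ["Note: ids are stored in the global ID_TABLE dict."],
)
```

In a real session the "clarifications" come from reading the model's exposed thinking trace: each failed attempt tells you which fact it was missing, and that fact becomes the next hint.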



If you're tired of Windows Explorer's laggy behavior and lack of features, there is some very good news: FilePilot is now in open beta!
















