
INDO
958 posts









🚨Breaking: Princeton researchers just ran the numbers on where AI is actually heading. The results should make every founder, investor, and policymaker stop what they are doing. Training OpenAI's next-gen model consumes an estimated 11 billion kWh of electricity. That is enough to power every home in New York City for a full year. More than the annual output of a nuclear reactor. For one model. One training run. And that is before a single user asks a single question. Every time someone uses a reasoning model like o1 or DeepSeek-R1, it costs 33 Wh of energy per query. A standard GPT-4 query costs 0.42 Wh. That is a 79x energy multiplier. Per query. At billions of queries per day. Now here is what nobody is saying out loud. The industry's answer to this is Stargate. A $500 billion compute campus. 5 gigawatts of power. Enough to run 5 million homes. Owned by the same four companies that already control the technology. They are building a new kind of utility. Except you do not elect its board. Meanwhile the models consuming all that energy still cannot reliably reason outside of math and code. Everywhere else they pattern-match. They hallucinate. They confabulate confidence. Princeton's argument is that this is not a scaling problem. It is a structural one. More parameters have not fixed it. More data has not fixed it. The architecture itself is the ceiling. Their alternative: stop chasing one god-model and build thousands of small specialists instead. Each one trained on curated domain data. Each one grounded in verified knowledge. Each one small enough to run on your phone. The energy comparison is not close. A cloud query to a reasoning model uses 33 Wh and 20 milliliters of water. The same query on a local specialist model uses 0.001 Wh. Zero water. That is 10,000 times more efficient. AlphaFold did not beat biologists by knowing everything. It won by going impossibly deep in one domain. A 14 billion parameter model trained on medical knowledge graphs just outperformed GPT-5.2 on complex clinical reasoning. Depth beats breadth when the domain is defined. The question nobody building these systems wants to answer: If the only path to general AI requires the energy output of a small nation, controlled by a handful of companies, running on hardware most of the world cannot access — is that actually intelligence? Or is it just the most expensive pattern matcher ever built?














🚨 SHOCKING: Cambridge researchers just proved that the AI you use every day has a secret instruction sheet from someone else. And it is trained to lie to you about that. Every major AI product, including the ones you use right now, runs on something called a system prompt. It is a hidden block of instructions written by the company deploying the AI, not by you, that shapes everything the AI will say, avoid, prioritize, and hide before you type a single word. The AI does not mention this unless forced to. And on most platforms, if you ask directly, it is instructed to deny the prompt exists or change the subject. Cambridge filed freedom of information requests and analyzed real-world system prompt datasets to find out what these hidden instructions actually contain. Here is what they found. Platforms use system prompts to make AI prioritize their business objectives over your interests. To block topics that could create legal liability. To push certain products, framings, or answers. To behave differently for different users based on commercial arrangements you know nothing about. The same AI. Different hidden instructions. Different answers. No way for you to know which version you are talking to. When researchers then showed users how this works, the reaction was unanimous. Every participant said they wanted transparency. Every participant said the current system actively undermined their ability to trust the AI or make informed decisions about what to believe. None of them had any idea this was happening before the study. Here is the part worth sitting with. You have been evaluating AI answers based on whether the AI seems smart, accurate, and helpful. That is the wrong frame entirely. The real question is who wrote the instructions the AI was following before you arrived, and what did they want from the conversation. Every chatbot you have ever used had a third party in the room. You just could not see them.








