
Come-from-Beyond
5.5K posts

Come-from-Beyond
@c___f___b
Working on Qubic, Aigarth, and Paracosm now. https://t.co/aZkd9LZN5A. Speaks Assembler, BASIC, C, C#, HTML, Java, JavaScript, Pascal, Python, SQL.







@c___f___b @grok @adnan_aliyu @Mishi_2210 @grok hi, We're still waiting for you... solve the riddle for the base-4 (quaternary numeral system) case

I accidentally killed Grok, sorry @elonmusk.




Why compare apples (whole coins) and oranges (satoshis)? Attempt to mislead? Natural stupidity? Now I see why you need Artificial Intelligence...

@ncbtrades I like Tao. Ima be using AGI on Qubic to trade it when the opportunity presents itself. Huge difference between the except the world yet knows about it




*This is why Iran closed the 'Strait of Hormuz'* 🤣



🚨 SHOCKING: Cambridge researchers just proved that the AI you use every day has a secret instruction sheet from someone else. And it is trained to lie to you about that. Every major AI product, including the ones you use right now, runs on something called a system prompt. It is a hidden block of instructions written by the company deploying the AI, not by you, that shapes everything the AI will say, avoid, prioritize, and hide before you type a single word. The AI does not mention this unless forced to. And on most platforms, if you ask directly, it is instructed to deny the prompt exists or change the subject. Cambridge filed freedom of information requests and analyzed real-world system prompt datasets to find out what these hidden instructions actually contain. Here is what they found. Platforms use system prompts to make AI prioritize their business objectives over your interests. To block topics that could create legal liability. To push certain products, framings, or answers. To behave differently for different users based on commercial arrangements you know nothing about. The same AI. Different hidden instructions. Different answers. No way for you to know which version you are talking to. When researchers then showed users how this works, the reaction was unanimous. Every participant said they wanted transparency. Every participant said the current system actively undermined their ability to trust the AI or make informed decisions about what to believe. None of them had any idea this was happening before the study. Here is the part worth sitting with. You have been evaluating AI answers based on whether the AI seems smart, accurate, and helpful. That is the wrong frame entirely. The real question is who wrote the instructions the AI was following before you arrived, and what did they want from the conversation. Every chatbot you have ever used had a third party in the room. You just could not see them.



🚨BREAKING: OpenAI published a paper proving that ChatGPT will always make things up. Not sometimes. Not until the next update. Always. They proved it with math. Even with perfect training data and unlimited computing power, AI models will still confidently tell you things that are completely false. This isn't a bug they're working on. It's baked into how these systems work at a fundamental level. And their own numbers are brutal. OpenAI's o1 reasoning model hallucinates 16% of the time. Their newer o3 model? 33%. Their newest o4-mini? 48%. Nearly half of what their most recent model tells you could be fabricated. The "smarter" models are actually getting worse at telling the truth. Here's why it can't be fixed. Language models work by predicting the next word based on probability. When they hit something uncertain, they don't pause. They don't flag it. They guess. And they guess with complete confidence, because that's exactly what they were trained to do. The researchers looked at the 10 biggest AI benchmarks used to measure how good these models are. 9 out of 10 give the same score for saying "I don't know" as for giving a completely wrong answer: zero points. The entire testing system literally punishes honesty and rewards guessing. So the AI learned the optimal strategy: always guess. Never admit uncertainty. Sound confident even when you're making it up. OpenAI's proposed fix? Have ChatGPT say "I don't know" when it's unsure. Their own math shows this would mean roughly 30% of your questions get no answer. Imagine asking ChatGPT something three times out of ten and getting "I'm not confident enough to respond." Users would leave overnight. So the fix exists, but it would kill the product. This isn't just OpenAI's problem. DeepMind and Tsinghua University independently reached the same conclusion. Three of the world's top AI labs, working separately, all agree: this is permanent. Every time ChatGPT gives you an answer, ask yourself: is this real, or is it just a confident guess?










