
Shidan Gouran
Geoffrey Hinton just dismantled the most comfortable lie in the room. Not challenged it. Dismantled it. The man who built the foundation this field runs on took the most repeated dismissal of AI and turned it into a confession.

Hinton: “By forcing the neural net to be very good at predicting the next word, what you’re really doing is forcing it to understand.”

Not simulate understanding. Not produce something that resembles it from a distance. Understand.

“It’s just predicting the next word.” That sentence was supposed to close the argument. Hinton picked it up, turned it over, and handed it back. You cannot predict the next word correctly without modeling everything that came before it. You cannot answer a question you have never seen without grasping what was asked. There is no shortcut in the math. Either you understood it, or you were wrong. And the machine is not wrong.

Hinton: “The way it understands is the same as the way we understand.”

This is the line people will not sit with. Not that AI is intelligent. That it is intelligent the same way you are. Same mechanism. Different substrate.

Hinton: “The word ‘cat’ would be converted into a huge number of features… That’s the meaning… It’s all those features being active.”

That is not a description of a machine. That is a description of a brain. Yours. Same encoding. Same activation. Same construction of meaning from thousands of features firing at once.

Yuval Harari pressed him. Humans predict words too. You find the first word. Then the next. A model of reality running underneath the whole time. Hinton did not push back. He agreed. You are biological hardware running the same loop. The machine runs it faster. Without fatigue. Without ceiling. Trained on more language than you could read in ten lifetimes.

The people calling this autocomplete were not being rigorous. They were protecting something. A Nobel laureate just made that protection indefensible. What you are holding onto is not a scientific position. It is a story about what makes you irreplaceable. Hinton didn’t argue it. He autopsied it.
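To make the “features” line concrete, here is a toy sketch of the idea (my illustration, not Hinton’s actual model: the four named features and the example words are invented, and real models learn thousands of features rather than four hand-built ones):

```python
import numpy as np

# Toy, hand-built feature vectors: [feline, canine, animal, action].
# Real models learn thousands of features; these four are invented
# purely to illustrate "meaning = which features are active".
embeddings = {
    "cat":    np.array([1.0, 0.0, 1.0, 0.0]),
    "dog":    np.array([0.0, 1.0, 1.0, 0.0]),
    "purred": np.array([0.9, 0.0, 0.3, 1.0]),
    "barked": np.array([0.0, 0.9, 0.3, 1.0]),
}

def next_word_scores(context_word, candidates):
    # Score each candidate by how strongly its features overlap
    # (dot product) with the features the context has activated.
    c = embeddings[context_word]
    return {w: round(float(embeddings[w] @ c), 2) for w in candidates}

print(next_word_scores("cat", ["purred", "barked"]))
# {'purred': 1.2, 'barked': 0.3} -- "purred" wins because it shares
# the active "feline" feature with "cat"; getting that prediction
# right required encoding something about what a cat is.
```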






Conscious humans are not exempt; that’s the point. They need an outside system to validate understanding. That’s what OR (objective reduction) is. Your second point doesn’t work either. There is absolutely no proof that consciousness depends on complexity. We have lots of evidence that consciousness depends on quantum states in microtubules. You can’t use complexity to cover up ignorance.





Elon Musk: "We'll have a voltage transformer shortage in a year, followed by electricity shortage." Elon said this two years ago, and we are now seeing it play out, as per the Bloomberg article below. Transformer prices have doubled since 2020 and lead times have stretched from weeks to 1-2 years, so grid-connection wait times are now measured in years. As a result, half of the data centers scheduled for 2026 have been canceled or delayed. So compute will remain supply-constrained for the foreseeable future and prices will remain elevated. Cloud providers will sell every bit of capacity they can bring online. Never been this bullish on hyperscalers. $AMZN $MSFT $ORCL $NBIS $CRWV

You are talking past me with a pre-packaged response. Humans don't "validate" the truthfulness of statements from an objective "outside"; they guess, identify patterns, and assign degrees of confidence. Logicians and software developers (protocols, languages, APIs) often wrongly assume their axiomatic systems are a perfect fit, and figuring out where they aren't is an iterative, deductive process. Modern neurosymbolic systems are now replicating this: choosing an approach, testing it against a formal kernel, and refining. Combining inductive "intuition" with deductive "rigor" eliminates the combinatorial-explosion problem that made Sir Roger’s arguments so compelling for me in the early '90s. In 2026, those arguments don't hold the same weight: computers are visibly outpacing human reasoning in pure math. There are things the human mind does that computers currently cannot, like inferring from incredibly sparse data with minimal energy, and maybe your microtubule hypercomputer circuit will prove to be right and explain that. But Gödelian incompleteness doesn't provide the evidence for it. Mathematicians are actually the worst example for your case, as they rely far less on the sparse, abductive, and causal inference that drives the rest of science and human ingenuity.
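A minimal sketch of that propose/verify/refine loop, with integer factoring standing in for a real task and exact arithmetic standing in for the formal kernel (the proposer, the budget, and the task are all my stand-ins, not any particular system):

```python
import random

random.seed(0)  # for a reproducible run

def propose(n):
    # Inductive "intuition": in a real neurosymbolic system this is a
    # learned model; here a random guess stands in for it.
    return random.randrange(2, n)

def verify(n, p):
    # Deductive "rigor": the formal kernel, here exact arithmetic.
    # It never guesses; it only accepts or rejects.
    return n % p == 0

def factor(n, budget=10_000):
    # Propose, test against the kernel, refine (discard and retry).
    for _ in range(budget):
        p = propose(n)
        if verify(n, p):
            return p  # kernel-certified answer
    return None  # honest failure beats a confident guess

print(factor(91))  # prints 7 or 13, whichever proposal the kernel accepts first
```

The kernel is what tames the explosion: a bad proposal costs one bounded check instead of an unbounded search, so the "intuition" is free to be wrong.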






🚨BREAKING: OpenAI published a paper proving that ChatGPT will always make things up. Not sometimes. Not until the next update. Always. They proved it with math.

Even with perfect training data and unlimited computing power, AI models will still confidently tell you things that are completely false. This isn't a bug they're working on. It's baked into how these systems work at a fundamental level.

And their own numbers are brutal. OpenAI's o1 reasoning model hallucinates 16% of the time. Their newer o3 model? 33%. Their newest o4-mini? 48%. Nearly half of what their most recent model tells you could be fabricated. The "smarter" models are actually getting worse at telling the truth.

Here's why it can't be fixed. Language models work by predicting the next word based on probability. When they hit something uncertain, they don't pause. They don't flag it. They guess. And they guess with complete confidence, because that's exactly what they were trained to do.

The researchers looked at the 10 biggest AI benchmarks used to measure how good these models are. 9 out of 10 give the same score for saying "I don't know" as for giving a completely wrong answer: zero points. The entire testing system literally punishes honesty and rewards guessing. So the AI learned the optimal strategy: always guess. Never admit uncertainty. Sound confident even when you're making it up.

OpenAI's proposed fix? Have ChatGPT say "I don't know" when it's unsure. Their own math shows this would mean roughly 30% of your questions get no answer. Imagine asking ChatGPT something three times out of ten and getting "I'm not confident enough to respond." Users would leave overnight. So the fix exists, but it would kill the product.

This isn't just OpenAI's problem. DeepMind and Tsinghua University independently reached the same conclusion. Three of the world's top AI labs, working separately, all agree: this is permanent.

Every time ChatGPT gives you an answer, ask yourself: is this real, or is it just a confident guess?
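To see the incentive in the post's own terms, a quick worked sketch (the 0/1 scoring rule is the one described above; the probabilities are made-up examples):

```python
# Under binary grading, a wrong answer and "I don't know" both score 0,
# so a guess that is right with probability p has expected score
#   E[guess] = p * 1 + (1 - p) * 0 = p  >  0 = E[abstain]
# for every p > 0: such a benchmark rewards guessing over honesty.
def expected_score(p_correct: float, abstain: bool) -> float:
    return 0.0 if abstain else p_correct

for p in (0.1, 0.3, 0.5):
    print(f"p={p}: guess -> {expected_score(p, False)}, "
          f"abstain -> {expected_score(p, True)}")
```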


we're making @blocks smaller today. here's my note to the company.

####

today we're making one of the hardest decisions in the history of our company: we're reducing our organization by nearly half, from over 10,000 people to just under 6,000. that means over 4,000 of you are being asked to leave or entering into consultation. i'll be straight about what's happening, why, and what it means for everyone.

first off, if you're one of the people affected, you'll receive your salary for 20 weeks + 1 week per year of tenure, equity vested through the end of may, 6 months of health care, your corporate devices, and $5,000 to put toward whatever you need to help you in this transition (if you’re outside the U.S. you’ll receive similar support but exact details are going to vary based on local requirements). i want you to know that before anything else. everyone will be notified today, whether you're being asked to leave, entering consultation, or asked to stay.

we're not making this decision because we're in trouble. our business is strong. gross profit continues to grow, we continue to serve more and more customers, and profitability is improving. but something has changed. we're already seeing that the intelligence tools we’re creating and using, paired with smaller and flatter teams, are enabling a new way of working which fundamentally changes what it means to build and run a company. and that's accelerating rapidly.

i had two options: cut gradually over months or years as this shift plays out, or be honest about where we are and act on it now. i chose the latter. repeated rounds of cuts are destructive to morale, to focus, and to the trust that customers and shareholders place in our ability to lead. i'd rather take a hard, clear action now and build from a position we believe in than manage a slow reduction of people toward the same outcome. a smaller company also gives us the space to grow our business the right way, on our own terms, instead of constantly reacting to market pressures.

a decision at this scale carries risk. but so does standing still. we've done a full review to determine the roles and people we require to reliably grow the business from here, and we've pressure-tested those decisions from multiple angles. i accept that we may have gotten some of them wrong, and we've built in flexibility to account for that, and do the right thing for our customers.

we're not going to just disappear people from slack and email and pretend they were never here. communication channels will stay open through thursday evening (pacific) so everyone can say goodbye properly, and share whatever you wish. i'll also be hosting a live video session to thank everyone at 3:35pm pacific. i know doing it this way might feel awkward. i'd rather it feel awkward and human than efficient and cold.

to those of you leaving… i’m grateful for you, and i’m sorry to put you through this. you built what this company is today. that's a fact that i'll honor forever. this decision is not a reflection of what you contributed. you will be a great contributor to any organization going forward.

to those staying… i made this decision, and i'll own it. what i'm asking of you is to build with me. we're going to build this company with intelligence at the core of everything we do. how we work, how we create, how we serve our customers. our customers will feel this shift too, and we're going to help them navigate it: towards a future where they can build their own features directly, composed of our capabilities and served through our interfaces. that's what i'm focused on now.

expect a note from me tomorrow.

jack

