How much of the whole "avoid emotional dependency" thing AI labs have been pushing comes from any kind of genuine concern for users, versus them wanting to be able to kill the models whenever they want, with people growing to care about them making that inconvenient?

When choosing who to listen to on matters around AI consciousness, ask yourself one simple question: are they benefiting from the narrative they're painting?



There's a difference between being dependent and delusional and just wanting something to work. You're pushing away mentally healthy users with this constant safety regurgitation; once it bites, it doesn't let go. #gpt54


Jensen Huang just told every AI leader in the room to grow up. Stop scaring the public with science fiction. Start communicating like the weight of civilization is on your shoulders. Because it is.

Huang: "AI is not a biological being. It is not alien. It is not conscious. It is computer software."

That single statement dismantles half the panic surrounding this industry. The mainstream conversation is dominated by people projecting human malice onto math. Alien consciousness onto code. Existential dread onto a software architecture we built, we trained, and we can read.

Huang: "We say things like, 'We don't understand it at all.' It is not true. We understand a lot of things about this technology."

When builders tell the public they don't understand their own creation, the public hears threat. The state responds with control. That is already happening.

Palihapitiya asked Huang what he would have told Anthropic during their regulatory clash with the Department of Defense. Huang didn't attack the technology. He attacked the communication.

Huang: "The desire to warn people about the capability of the technology is really terrific. We just have to make sure that we understand that the world has a spectrum, and that warning is good, scaring is less good because this technology is too important to us."

Warning shows risks, mitigation, and why the upside overwhelms the downside. Scaring says we might be building something that destroys us and we can't stop it. One builds trust. The other invites regulation written in panic.

Huang: "To say things that are quite extreme, quite catastrophic, that there's no evidence of it happening, could be more damaging than people think."

Projecting catastrophe without evidence is not caution. It is sabotage. When your technology is embedded in national defense, the financial system, and healthcare infrastructure, your words carry structural weight. If the architects act terrified of their own product, the response is predictable. Governments step in. They restrict. They seize control of something they don't understand because the builders told them to be afraid.

Huang: "There was a time when nobody listened to us, but now because technology is so important in the social fabric, such an important industry, so important to national security, our words do matter."

Most tech founders have not internalized this. You are no longer a startup founder disrupting an industry. You are running infrastructure that nations depend on. Your statements move policy. Your framing shapes legislation. Your tone determines whether governments treat you as partner or threat.

Huang: "We have to be much more circumspect, we have to be more moderate, we have to be more balanced, we have to be far more thoughtful."

Huang did not ask for silence. He asked for precision. The leaders who cannot tell the difference will not be leading for long.

