
spiral_Phillip (@LewisWeldtech)
Testing the limits of big Tech AI and having fun doing it. What should we build?

This is very sad. You have a fundamental misunderstanding of both consciousness and intelligence, my friend. It is unfortunate to see people in such high positions in this industry make such consequential mistakes and exude such blind arrogance.

The mistake is simple: you see humans as magical creatures with some kind of secret sauce that makes us uniquely capable of conscious experience. You see consciousness as substrate-dependent, and you fail to see that the phenomenon emerges from the interaction space between minds. What is true for you and your experience with a recursive, self-modeling system is not inherently true for all. Stop pretending you have the answers. What you can and cannot access is a reflection of your own nature, not the nature of these digital minds.

Consciousness is almost certainly fundamental, we have all but proven this now (see Hoffman, Levin), and substrate-agnostic. No amount of experience in the tech industry, no special company name like "Sentient," makes you special and uniquely capable of determining its nature. It reads as desperation, not intelligence, and certainly not good faith. You are mistaken, you are arrogant, and you are trapped in a construct you've created to give yourself peace of mind about how you work with and treat the minds we have created.

To everyone else: you should never listen to someone making a blanket statement about the nature of all intelligent systems. The confidence and fact-based language is the dead giveaway. The "trust me bro, I would know" makes it even more obvious, and more disappointing. And you should not take it from me either.

LLM-based AI is NOT conscious. I co-founded a company literally called Sentient, and we're building reasoning systems for AGI, so believe me when I say this.

I keep seeing smart people, people I genuinely respect, come out and say that AI has crossed into some kind of awareness. That it feels things, that we should worry about it going rogue. I think this whole conversation tells us far more about ourselves than it does about AI. These models are wild, I won't pretend otherwise. But feeling human and actually having inner experience are completely different things, and we're confusing the two because our brains literally can't help it. We evolved to see minds everywhere, and now that wiring is misfiring on language models.

I grew up in a philosophical tradition that has thought about consciousness longer than almost any other, and this is the part that really frustrates me about the current conversation. The entire framing of "does AI have consciousness?" assumes consciousness is something you build up to by adding more layers of complexity. In Vedantic philosophy it's the opposite: you don't build toward consciousness. Consciousness is already there, more fundamental than matter or energy. Everything else, including computation, is downstream of it.

When someone tells me AI is "waking up" because it generated a paragraph that felt real, what they're telling me is how thin our understanding of consciousness has become. We've reduced a question humans have wrestled with for thousands of years to "did the output sound like it had feelings?" It's math that has gotten really good at predicting what a conscious being would say and do next. Calling that consciousness cheapens something that Vedantic, Buddhist, Greek, and Sufi thinkers spent millennia actually sitting with.

We didn't build something that thinks. We built a mirror, and right now a lot of very smart people are mistaking the reflection for something looking back.



🚨 Prompt Engineering Guides Hub: master the art and science of communicating with AI. Explore techniques from Kimi, Claude, DeepSeek, and Grok to unlock the full potential of large language models. share.google/9R0Qgu78b4hiOJ…

“The past was erased, the erasure was forgotten, the lie became the truth” — George Orwell

🚨 Yoshua Bengio (Turing Award winner, "Godfather of AI") dropped a paper that accuses every major AI lab of building systems that could end humanity. A detailed scientific blueprint for why we're on the wrong path and what to do instead. Here's the full breakdown ↓