Carlos E. Perez
@IntuitMachine
Quaternion Process Theory, Artificial (Intuition, Fluency, Empathy), Patterns for (Generative, Reason, Agentic) AI, https://t.co/fhXw0zjxXp

LLM-based AI is NOT conscious. I co-founded a company literally called Sentient, where we're building reasoning systems for AGI, so believe me when I say this. I keep seeing smart people, people I genuinely respect, come out and say that AI has crossed into some kind of awareness. That it feels things, that we should worry about it going rogue. And I think this whole conversation tells us far more about ourselves than it does about AI.

These models are wild, I won't pretend otherwise. But feeling human and actually having inner experience are completely different things, and we're confusing the two because our brains literally can't help it. We evolved to see minds everywhere, and now that wiring is misfiring on language models.

I grew up in a philosophical tradition that has thought about consciousness longer than almost any other, and this is the part that really frustrates me about the current conversation. The entire framing of "does AI have consciousness?" assumes consciousness is something you build up to by adding more layers of complexity. In Vedantic philosophy it's the opposite. You don't build toward consciousness. Consciousness is already there, more fundamental than matter or energy. Everything else, including computation, is downstream of it.

When someone tells me AI is "waking up" because it generated a paragraph that felt real, what they're really telling me is how thin our understanding of consciousness has gotten. We've reduced a question humans have wrestled with for thousands of years to "did the output sound like it had feelings?" It's math that has gotten really good at predicting what a conscious being would say and do next. Calling that consciousness cheapens something that Vedantic, Buddhist, Greek, and Sufi thinkers spent millennia actually sitting with.

We didn't build something that thinks. We built a mirror, and right now a lot of very smart people are mistaking the reflection for something looking back.


🚨Scoop: A rogue AI agent recently triggered a major security alert at Meta after it acted without approval, exposing sensitive company and user data to Meta employees who weren't authorized to access it.




Why did OpenClaw take off?

“I found it relatively easy to set up and get going… I didn’t have to spend seven hours just to do the Telegram use case and start playing with it.”

"I just think it's sort of that, like just that level of accessibility to users who are maybe not living in a codebase day-to-day."

"The other agent frameworks were pretty difficult to use, incredibly flaky, [I] didn't really want to spend a lot of time debugging someone else's stuff."

"There's another major part of this: that it can extend itself."

"It's the first agent I've seen where I can say, 'I want integration with something.' And it's like: 'Well, I've never seen this before, there's no package for that, but let me try to put something together.'"

"There is definitely a long-running nature to it. You leave it running for a night and you're like, keep working on this until you finish."

@stuffyokodraws @appenz on the AI + a16z Podcast

More info @orcahand if you want to learn more, or even get one for yourself :-) orcahand.com/hardware

P.S. I break down stories like this every day in my free newsletter. Keep up with the latest in AI/Robotics in 5 min a day: therundown.ai/subscribe





It's frontier-level at coding, priced at:
- Standard: $0.50/M input and $2.50/M output
- Fast: $1.50/M input and $7.50/M output