Pinned Tweet
Eliot Ferstl
387 posts

Eliot Ferstl
@eaferstl
Founder @ PandoCore. Building the execution security primitive. Thinker | Opinions | Jokes
Denver, CO · Joined November 2025
35 Following · 76 Followers
Eliot Ferstl reposted

Eh.... A degree-4 polynomial fits exactly through any set of 5 data points.
It's like being surprised that any two cities lie on a straight line
Terrible Maps@TerribleMaps
Believe it or not, Germany’s 5 largest cities lie perfectly on a 4th-degree polynomial
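The reply's point is just polynomial interpolation: any 5 points with distinct x-values are fit exactly by some polynomial of degree at most 4 (5 coefficients, 5 equations). A minimal sketch with numpy — the point values here are arbitrary, chosen only for illustration, not real city coordinates:

```python
import numpy as np

# Five arbitrary points with distinct x-values.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, -1.0, 5.0, 0.5, 3.0])

# A degree-4 fit has 5 unknown coefficients, so the system is
# exactly determined and the fit passes through every point.
coeffs = np.polyfit(x, y, deg=4)
print(np.allclose(np.polyval(coeffs, x), y))  # True: the fit is exact
```

Swap in any other 5 y-values and the result is the same, which is why fitting 5 cities "perfectly" carries no information.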

AI content will vastly exceed all human content
Brett Winton@wintonARK
We have been surpassed: AI written output exceeded human written output in 2025

@TerribleMaps What did the cubic function say to the 2nd order polynomial?
Nice quads!

@APompliano You said universe?
Is that an anti-alien proposition?

@AltcoinDaily If you only care about what investment returns can buy you, idk what to say.
Build something.

@konnydev Depends on the tool.
If it's a stupid web app, yeah, that's a long time.
If it's highly technical and critical, that's probably not long enough.

@eterniiel Water doesn't just disappear.
Energy is the problem. Not water.
Eliot Ferstl reposted

LLM based AI is NOT conscious.
I co-founded a company literally called Sentient, we're building reasoning systems for AGI, so believe me when I say this.
I keep seeing smart people, people I genuinely respect, come out and say that AI has crossed into some kind of awareness. That it feels things, that we should worry about it going rogue. And I think this whole conversation tells us way more about ourselves than it does about AI.
These models are wild, I won't pretend otherwise. But feeling human and actually having inner experience are completely different things, and we're confusing the two because our brains literally can't help it. We evolved to see minds everywhere, and now that wiring is misfiring on language models.
I grew up in a philosophical tradition that has thought about consciousness longer than almost any other, and this is the part that really frustrates me about the current conversation.
The entire framing of "does AI have consciousness?" assumes consciousness is something you build up to by adding more layers of complexity. In Vedantic philosophy it's the opposite. You don't build toward consciousness. Consciousness is already there, more fundamental than matter or energy. Everything else, including computation, is downstream of it.
When someone tells me AI is "waking up" because it generated a paragraph that felt real, what they're telling me is how thin our understanding of consciousness has gotten. We've reduced a question humans have wrestled with for thousands of years to "did the output sound like it had feelings?" It's math that has gotten really good at predicting what a conscious being would say and do next. Calling that consciousness cheapens something that Vedantic, Buddhist, Greek and Sufi thinkers spent millennia actually sitting with.
We didn't build something that thinks. We built a mirror and right now a lot of very smart people are mistaking the reflection for something looking back.