Phil Frana reposted
Phil Frana
26.6K posts

Phil Frana
@ArtificialOther
James Madison University professor and co-editor of the Encyclopedia of AI, Second Edition (Bloomsbury Academic, 2026).
Virginia, USA · Joined May 2012
2.2K Following · 4.4K Followers

‘So much worse than I even thought’: Utah’s ‘hyperscale’ data center could create massive heat island near Great Salt Lake
sltrib.com/news/environme…

NH 96566-KN The First "Computer Bug" history.navy.mil/our-collection…
Phil Frana reposted

Human thought is becoming less diverse.
Not because of censorship. Not because of authoritarian control.
Because of convenience.
A paper published in August 2025 documents what happens when billions of people outsource their thinking to the same machine, and the answer is something the AI industry has never publicly addressed.
The paper asks: toward a standardization of thought?
That subtitle, buried in the research structure, is the most alarming sentence in the document. Not a finding. A question. One the researcher believes we are already living inside without noticing.
Here is the mechanism.
Humans have always thought differently from each other. Different cultures framed problems differently. Different intellectual traditions produced different answers. Different languages encoded different ways of seeing the world. That diversity was not inefficiency. It was resilience. It was the source of innovation, of unexpected solutions, of the friction that produces better ideas.
Algorithmic personalization creates filter bubbles that limit the diversity of opinions, leading to the homogenization of thought and polarization across society.
When the same AI answers the same question for 500 million people, the diversity of starting points compresses. The answers sound reasonable. They sound balanced. They sound like what a thoughtful, educated person would say.
They sound like each other.
As AI systems like ChatGPT achieve unprecedented adoption rates, they effectively function as external memory systems that billions of people increasingly rely upon for mental tasks.
External memory. Shared. Global. Centralized. Controlled by a small number of companies making decisions about what that memory contains, how it is organized, and what it surfaces when you ask.
The researcher does not claim this is intentional. That is the point.
It does not need to be intentional to reshape the intellectual landscape of an entire civilization.
Source: Gesnot · arXiv:2508.16628 · August 2025 · arxiv.org/abs/2508.16628

Phil Frana reposted

This is, truly, DELUSIONAL.
They took 2,245 real résumés written by humans and asked ChatGPT, DeepSeek, and other models to rewrite them. Same résumé, same experience, same education... everything identical, just rewritten.
Then they showed random pairs to each AI and asked it to pick the better one: its own version versus the human's. Every model picked itself more than 95% of the time. Even after controlling for quality (making sure the human CV was not objectively worse), they kept choosing their own.
Then they simulated real hiring processes across 24 industries and found that if you used the same model as the recruiter, you had a 23% to 60% better chance of passing the first screen.
Why does this happen? The authors have a strong hypothesis: when you ask a model to improve your CV, it rewrites it with its own stylistic fingerprint: its favorite words, its rhythm, the way it builds sentences... Each AI has a style of its own, the way every writer has their own handwriting. Later, when that same AI evaluates, it recognizes itself on the other side and hands out a perfect score. The more capable the model, the sharper its ability to recognize itself.
Job hunting is now like a reverse Turing test: instead of a machine trying to convince you it is human, it seems we are the ones who now have to convince the robots that we are one of them.
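The pairwise self-preference measurement described above can be sketched in a few lines. This is a toy simulation, not the paper's code: the judge here is a stand-in function with the bias wired in directly, and the model name, marker token, and 95% bias level are illustrative assumptions used only to show how the comparison protocol works.

```python
import random

def biased_judge(model_name, resume_a, resume_b, bias=0.95):
    """Toy stand-in for an LLM judge: returns the index (0 or 1) of the
    preferred résumé. Self-preference is simulated directly: if the judge
    spots its own stylistic marker, it picks that side with probability
    `bias`; otherwise it chooses at random."""
    own = [i for i, r in enumerate((resume_a, resume_b)) if model_name in r]
    if own and random.random() < bias:
        return own[0]
    return random.randrange(2)

def self_preference_rate(model_name, human_resumes, trials=10_000):
    """Fraction of random pairwise comparisons where the model's own
    rewrite beats the human original."""
    wins = 0
    for _ in range(trials):
        human = random.choice(human_resumes)
        rewritten = f"{human} [style:{model_name}]"  # stand-in for an LLM rewrite
        pair = [human, rewritten]
        random.shuffle(pair)  # randomize presentation order, as in the study
        if model_name in pair[biased_judge(model_name, pair[0], pair[1])]:
            wins += 1
    return wins / trials

random.seed(0)
rate = self_preference_rate("gpt-x", ["resume one", "resume two", "resume three"])
print(f"simulated self-preference rate: {rate:.1%}")
```

With a 95% built-in bias, the measured rate lands near the >95% self-selection the thread reports; the point of the sketch is that the protocol (rewrite, shuffle, judge, count) is simple enough that the bias is easy to detect once you look for it.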

Phil Frana reposted

Stephen Wolfram, founder of Wolfram Research, explains how LLMs are quietly dismantling our deepest assumptions about consciousness:
He argues that large language models have done something philosophy and neuroscience couldn't:
"In terms of consciousness, I have to say, the idea that there's sort of something magic that goes beyond physics that leads to sort of conscious behavior, I kind of think that LLMs kind of put the final nail in that coffin."
His reasoning is that LLMs keep doing things people assumed they couldn't:
"There were all these things where it's like, oh, maybe it can't do this, but actually it does. And it's just an artificial neural net."
Wolfram then challenges a core assumption about conscious experience: the feeling that we are a single, continuous self moving through time.
"I think our notion of consciousness is a lot related to the fact that we believe in the single thread of experience that we have. It's not obvious that we should have a persistent thread of experience."
He points out that physics doesn't actually support this intuition:
"In our models of physics, we're made of different atoms of space at every successive moment of time. So the fact that we have this belief that we are somehow persistent, we have this thread of experience that extends through time, is not obvious."
Then Wolfram offers a striking origin story for consciousness itself.
@stephen_wolfram suggests it traces back to a simple evolutionary pressure: the moment animals first needed to move.
"I kind of realized that probably when animals first existed in the history of life on Earth, that's when we started needing brains. If you're a thing that doesn't have to move around, the different parts of you can be doing different kinds of things. If you're an animal, then one thing you have to do is decide, are you going to go left or are you going to go right?"
That single binary choice, he argues, may be the seed of everything we now call awareness:
"I kind of think it's a little disappointing to feel that this whole wanted thing that ends up being what we think of as consciousness might have originated in just that very simple need to decide if you are an animal that can move. You have to take all that sensory input and you have to make a definitive decision about do you go this way or that way."
The takeaway is unsettling but clarifying.
If LLMs can produce complex behavior from simple rules, then consciousness may not be a mystical add-on to physics.
It may just be what happens when a layered enough system has to make a decision.

Elon Musk Faces Criminal Probe in France After Ignoring Summons in X Case - Jon Brodkin arstechnica.com/tech-policy/20…

How a New Breed of Hacking Tools is Forcing a White House Reset washingtonpost.com/technology/202…
Phil Frana reposted

VSU receives $1 million for new AI and cybersecurity center
virginiabusiness.com/vsu-receives-1…

Science, Promise, and Peril in the Age of AI quantamagazine.org/series/science…

Tech is turning increasingly to religion in a quest to create ethical AI - Krysta Fauria apnews.com/article/ai-art…

He Couldn’t Land a Job Interview. Was AI to Blame? - Todd Feathers wired.com/story/he-could…

The Cultural Divide between Mathematics and AI - Ralph Furman sugaku.net/content/unders…

Buddhist sect welcomes humanoid robot Gabi with precept ceremony koreatimes.co.kr/southkorea/soc…

We’re Entering Uncharted Territory for Math: Terence Tao, the world’s greatest living mathematician, has a vision for AI - Matteo Wong theatlantic.com/technology/arc…

Mathematics, word problems, common sense, and artificial intelligence - Ernest Davis arxiv.org/abs/2301.09723

The Tensor Product, Demystified - Tai-Danae Bradley math3ma.com/blog/the-tenso…

Mathematics in Movies people.math.harvard.edu/~knill/mathmov…




