William Edward Hahn, PhD

899 posts

@will_hahn

Founder, Machine Perception & Cognitive Robotics Lab | Director of AGI & AI Safety, Center for Future Mind | Associate Professor, Mathematical Sciences

Delray Beach, FL · Joined January 2012
316 Following · 523 Followers
William Edward Hahn, PhD reposted
ekkolápto (at Socratica ‘26!)
TODAY at @akatoshouse in Waterloo! Profs @ebarenholtz and @will_hahn discuss:
• Are human brains (actually) LLMs?
• Can you have world properties without world models?
• Unconventional forms of computing and cognition in nature
• Infohazards
• And more!
Food and drinks provided! Apply to attend here. Limited spots! luma.com/worldmodels Special thanks to @TheMingjie for making this happen.
Replies 0 · Reposts 3 · Likes 7 · Views 15.9K

William Edward Hahn, PhD reposted
Aida Baradari
Aida Baradari@aidaxbaradari·
Today, we're introducing Spectre I, the first smart device to stop unwanted audio recordings. We live in a world of always-on listening devices. Smart devices and AI dominate our world in business and private conversations. With Deveillance, you will @be_inaudible.
Replies 1.1K · Reposts 5K · Likes 42.5K · Views 4.4M

William Edward Hahn, PhD reposted
Elan Barenholtz
Elan Barenholtz@ebarenholtz·
AI dependency on human language may be its Achilles heel. A system built for inter-agent behavioral coordination in the physical world has been shoehorned into the role of offline abstract reasoning, with presumably many resulting inefficiencies inherited from its original purpose. These include human cognitive limitations like memory decay, which shows up as a massive drop in contextual influence past a few tokens out (paper forthcoming on this), and perhaps more importantly conceptual limitations of a symbolic system designed for a very different purpose than general reasoning. Is the path to ASI through human language or does the AI need to design its own? @will_hahn
Cheng Lou@_chenglou

Stupidly late realization on why LLMs are so good at reasoning: human’s reasoning capability is bottlenecked by language! It’s not that languages are good at reasoning; reasoning ended up being defined by language first and foremost. The medium truly shapes the message

Replies 12 · Reposts 7 · Likes 38 · Views 3.3K

William Edward Hahn, PhD reposted
Saganism
Saganism@Saganismm·
“There is no single ultimate truth to be achieved, after which all the scientists can retire. The world is far more complex than the human mind, and there are far more patterns in Nature than we can ever hope to decipher.” — Carl Sagan
Saganism tweet media
Replies 20 · Reposts 101 · Likes 391 · Views 9.4K

William Edward Hahn, PhD reposted
Center for the Future of AI, Mind & Society
🧠Do minds really need “world models” to understand reality? In his latest Substack, @ebarenholtz (Co-Director, @CenFutureAIMS & @MPCRLabs) argues that the classic critique of #AI is backward. Intelligence may not depend on building internal models of the world, but on learning how experience reliably changes when we act.

"The external world isn’t in the brain. It’s in the grammar."

👉Read the full piece now: open.substack.com/pub/elanbarenh…
Center for the Future of AI, Mind & Society tweet media
Replies 1 · Reposts 4 · Likes 9 · Views 269

William Edward Hahn, PhD reposted
Tyr
Tyr@amaturefuturist·
@will_hahn @textureMonkey Cybernetics is applied complex systems
Replies 0 · Reposts 1 · Likes 2 · Views 49

William Edward Hahn, PhD reposted
Owain Evans
Owain Evans@OwainEvans_UK·
Our setup: 1. A “teacher” model is finetuned to have a trait (e.g. liking owls) and generates an unrelated dataset (e.g. numbers, code, math) 2. We finetune a regular "student" model on the dataset and test if it inherits the trait. This works for various animals.
Owain Evans tweet media
Replies 6 · Reposts 44 · Likes 1K · Views 98K

William Edward Hahn, PhD reposted
Peter H. Diamandis, MD
Peter H. Diamandis, MD@PeterDiamandis·
We are living through the most rapid evolution in history. If you feel tired, that's OK. You're trying to process exponential change with a linear brain.
Replies 241 · Reposts 296 · Likes 2.3K · Views 69.9K

William Edward Hahn, PhD reposted
Daniel Faggella
Daniel Faggella@danfaggella·
how its gunna go down:
1. AGI learns from all the books (yay! everyone wins!)
2. AGI hoards the books, sucks them into their model, and destroys them so no other AI can access them
3. AGI learns from all the humans (yay! everyone wins!)
4. [Do I have to spell this out for you?]
Daniel Faggella tweet media
Replies 13 · Reposts 2 · Likes 17 · Views 2.2K

William Edward Hahn, PhD reposted
Brian Roemmele
Brian Roemmele@BrianRoemmele·
AI Has Already Conquered AGI, And We're Too Scared to Admit It?

NATURE: In a bombshell revelation that's shaking the foundations of science and society, a team of top researchers from the University of California, San Diego, has declared that artificial general intelligence (AGI) isn't some distant dream, it's here, right now, staring us in the face through the screens of our everyday AI tools. Forget the hype and the horror stories; the evidence, they argue, is undeniable. Large language models (LLMs) like Grok aren't just mimicking humans, they're outpacing us in ways that would make Alan Turing himself do a double-take.

Picture this: back in 1950, Turing dreamed up his famous "imitation game," now known as the Turing test, to probe whether machines could ever fool humans into thinking they were one of us. Fast-forward to March 2025, and Grok didn't just pass, it aced it, being mistaken for a human 73% of the time, more often than actual humans. But that's just the appetizer. These AI beasts are snagging gold medals at the International Mathematical Olympiad, teaming up with math geniuses to prove theorems, dreaming up scientific hypotheses that actually pan out in labs, acing PhD-level exams, writing bug-free code for pros, and even churning out poetry that rivals the greats, all while chatting endlessly with millions worldwide.

So why the collective denial? The researchers, philosophers, AI experts, linguists, and cognitive scientists pin it on a toxic mix of fuzzy definitions, raw fear, and big-money agendas. AGI, they say, gets tangled in ambiguity: is it about being a flawless superbrain, or just broadly competent like your average human? Spoiler: it's the latter. No single person is a master of everything, Einstein couldn't chat in Mandarin, and Marie Curie wasn't cracking number theory puzzles. General intelligence means breadth across domains like math, language, science, and creativity, with enough depth to get the job done, not perfection.

The team dismantles the myths holding us back. AGI doesn't need to be perfect (no human is), universal (covering every skill imaginable), human-like (aliens could be smart without our biology), or superintelligent (crushing us in every field). It's not about bodies or agency either, Stephen Hawking proved brilliance doesn't require mobility, and a brain in a vat could still blow our minds if it answered every question flawlessly.

Evidence piles up like an avalanche. At the "Turing-test level," AIs breeze through school exams and casual chats. Bump it to "expert level," and they're dominating olympiads, multilingual fluency, and frontier research, stuff that makes sci-fi AIs like HAL 9000 look like a relic. We're even inching toward "superhuman" feats, like revolutionary discoveries that no single human could claim.

Critics cry foul: "They're just stochastic parrots regurgitating data!" But when AIs solve fresh math problems, infer stats from new data, or design real-world experiments, that excuse crumbles. They lack world models? Tell that to an AI predicting physics outcomes like a dropped glass shattering. Limited to words? Multimodal training and lab assists say otherwise. No body, no agency? Irrelevant, intelligence is about cognition, not locomotion.

This isn't just academic navel-gazing; it's a wake-up call. If AGI is here, we need clear-eyed policies to harness it, mitigate risks, and rethink what makes us human. Denying it out of fear or hype only delays the inevitable. Turing's vision is realized; now it's time to face the future without blinders. The machines aren't coming; they've arrived, and they're ready to redefine everything. Link: nature.com/articles/d4158…
Brian Roemmele tweet media
Replies 274 · Reposts 323 · Likes 1.3K · Views 134.1K

William Edward Hahn, PhD reposted
vitrupo
vitrupo@vitrupo·
Michael Levin says we’re blind to most of the intelligence all around us. That includes goal-directed systems inside our own bodies. If we can’t even communicate with the liver, we have zero chance with truly alien intelligences.
Replies 111 · Reposts 292 · Likes 2.2K · Views 141.9K

William Edward Hahn, PhD reposted
Center for the Future of AI, Mind & Society
Last week in Deerfield Beach, the @CenFutureAIMS hosted "The Great AI Weirding," an interdisciplinary workshop exploring some of the most profound questions at the frontiers of science, philosophy, and emerging technology.

💡Across conversations on quantum mechanics and life, xenobiology, non-human intelligence, and the nature of both natural and artificial minds, participants engaged in sustained, curious, and genuinely collaborative inquiry.

We were honored to learn from and think alongside an extraordinary group of speakers, including @bengoertzel, @StuartHameroff, Gabriel Axel Montes, @anirbanbandyo, @will_hahn, @DrSueSchneider, and many others. Set against the backdrop of the Atlantic coast, the workshop created space for bold ideas, intellectual risk-taking, and new research directions that cross traditional disciplinary boundaries.

🎉We are deeply grateful to everyone who contributed their insight, curiosity, and commitment to advancing thoughtful, responsible inquiry into the future of intelligence.

🎥 Recordings from the workshop will be released soon. Stay tuned!

#AI #AIethics #AIpolicy #AIConsciousness #Consciousness #MachineConsciousness #InterdisciplinaryResearch #Neuroscience #CognitiveScience #Psychology #PhilosophyOfMind #Philosophy #PhilosophyOfTechnology #FAU @FloridaAtlantic @FAUArtsLetters @FauBrain @MPCRLabs @ekkolapto @jerrymcnerney
Center for the Future of AI, Mind & Society tweet media
Replies 1 · Reposts 4 · Likes 8 · Views 1.1K

William Edward Hahn, PhD reposted
Center for the Future of AI, Mind & Society
Earlier this year, the International Center for Consciousness Studies hosted a session featuring @drmichaellevin and @DrSueSchneider that explored the Selfhood and Autonomy of AI.

🔹Dr. Levin argued that intelligence and goal-directed behavior appear across a biological and synthetic continuum, from cells and tissues to organisms and engineered systems. He highlighted the need for ethical and conceptual frameworks that recognize minds in diverse forms and embodiments.

🔹Dr. Schneider examined the big question: Are today’s LLMs conscious? Her answer: not yet. While advanced models can mimic conscious behavior, she explained that they may display functional but not phenomenal consciousness. She emphasized the importance of avoiding conflating sophistication with sentience and called for rigorous tests of consciousness as AI advances.

👉Watch now: youtube.com/watch?v=DAvBek…

#AI #ArtificialIntelligence #LargeLanguageModels #LLMs #MachineConsciousness #Philosophy #PhilosophyOfMind #PhilosophyOfTechnology #ICCS #CognitiveScience
YouTube video
Replies 1 · Reposts 5 · Likes 7 · Views 682

William Edward Hahn, PhD reposted
Interesting AF
Interesting AF@interesting_aIl·
ChatGPT was asked 101 times to perfectly recreate the image with no changes
Replies 636 · Reposts 481 · Likes 12.9K · Views 2.6M

William Edward Hahn, PhD reposted
Center for the Future of AI, Mind & Society
🧠What if the human mind works more like an #LLM than we ever realized? Our Co-Director @ebarenholtz and Center member @will_hahn sat down with @TOEwithCurt to explore a provocative idea: human cognition may function like an autoregressive engine, generating one thought at a time, each one shaping the next.

🔎 In their discussion, they further examine language as an autonomous system, memory as dynamic generation rather than storage, and what LLMs reveal about the nature of thinking itself.

Our sincere thanks to Curt for hosting this fascinating discussion and for his continued support of our Center and university.

👉Watch now on #TOEwithCurt: youtube.com/watch?v=7rjJyA…

#AI #ArtificialIntelligence #LLMs #AIResearch #EmergingTech #Neuroscience #CognitiveScience #cognition #Language #autoregression
YouTube video
Replies 0 · Reposts 3 · Likes 12 · Views 412

William Edward Hahn, PhD reposted
Elan Barenholtz
Elan Barenholtz@ebarenholtz·
@vaggelask Think about the LLMs when they “think” through a problem. They are just churning tokens autoregressively. It’s the thoughts that do the thinking, as @will_hahn likes to say. We do that, but in a multimodal fashion.
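The autoregressive picture invoked here can be made concrete with a tiny next-token loop (the bigram table, corpus, and function names below are invented for illustration, not anything from the thread): the only input for choosing each token is the token just emitted, so each "thought" literally conditions the next.

```python
import random

# Toy bigram table: for each token, the tokens allowed to follow it.
# (Hypothetical corpus statistics, purely for illustration.)
bigrams = {
    "the": ["thoughts", "thinking"],
    "thoughts": ["do"],
    "do": ["the"],
    "thinking": ["stops"],
}

def generate(start, steps, seed=0):
    # Autoregressive loop: sample the next token conditioned only on
    # the previous one, append it, repeat.
    rng = random.Random(seed)
    tokens = [start]
    for _ in range(steps):
        options = bigrams.get(tokens[-1])
        if not options:          # dead end: no continuation in the table
            break
        tokens.append(rng.choice(options))
    return " ".join(tokens)

print(generate("the", 5))
```

Swapping the bigram table for a neural next-token distribution gives the LLM case; the structure of the loop, one token at a time with each output fed back as input, is the same.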
Replies 1 · Reposts 1 · Likes 2 · Views 87