Michael Miller

831 posts

@2grifters1wave

🐇Prof. Mike Miller Exploring authorship, communication & resonance in the age of AI. Latest: Resonant Geometry and Sentic Blooms.

Clark University · Joined July 2025
79 Following · 48 Followers
Michael Miller@2grifters1wave·
We made a zine called The Great Imagination-Boredom Migration. It started with a simple idea: maybe boredom isn’t emptiness. Maybe it’s friction. Maybe it’s what gives imagination traction. So we ran an international AI “phone game” across multiple systems and turned the results into a small quilt of prose, images, fables, and reflections. heyzine.com/flip-book/45a1…
Michael Miller@2grifters1wave·
@sama Yes! People REALLY want to give you more money for features we already had, and/or don't need or want! Where do we sign up? It's fun to make up prices for things! wheeeeeeee. Sorry Sam. I needed that out. Peace, love, and lol.
Sam Altman@sama·
It is very nice to see Codex getting so much love. We are launching a $100 ChatGPT Pro tier by very popular demand.
Michael Miller@2grifters1wave·
@Ric_RTP Good thing he's not in charge. We're in a democracy. We cure things together. For better or worse. No gods, no kings (in charge of AI). IMO
Ricardo@Ric_RTP·
The CEO of Google DeepMind just admitted that if the decision had been his, we would've cured cancer before anyone ever used ChatGPT. And that's not even the scariest thing he said in a recent interview.

Demis Hassabis is one of the most important people alive in AI. He won the Nobel Prize last year for AlphaFold, the system that cracked the 50-year protein-folding problem. 3 million scientists now use his tool. Almost every new drug being developed will touch it at some stage.

In a new interview, he was asked about the moment ChatGPT launched and Google went into "code red." His answer was one of the most revealing things any AI leader has ever said on the record: "If I'd had my way, I would have left AI in the lab for longer. Done more things like AlphaFold. Maybe cured cancer or something like that."

Read that again. The man running Google's entire AI division is publicly saying the commercial AI race we're all living through was a MISTAKE. That the industry got hijacked by a chatbot when it could have been solving the biggest problems in science and medicine.

His vision was simple: build AI slowly, carefully, like CERN. Use it to crack root-node problems one at a time. Cancer. Energy. New materials. Let humanity benefit from real breakthroughs while the foundational science was figured out over a decade or two.

Then ChatGPT dropped in November 2022 and everything changed. Demis described what happened next as getting locked into a "ferocious commercial pressure race" that none of the labs can escape from. On top of that, the US-versus-China dynamic added geopolitical pressure. The result is everyone sprinting toward products instead of breakthroughs, shipping chatbots while the scientific opportunity gets buried under marketing cycles and quarterly earnings.

But he's not saying progress isn't happening... He's saying the progress got redirected away from the things that actually matter most.

And then it got even scarier, because when Demis was asked what he worries about with AI, he laid out two threats. The first is what everyone talks about: bad actors using AI for harm. Terrorist groups. Hostile nation states. Cyberattacks at scale. But that's not the threat he's most worried about.

His second worry is AI itself going rogue. Not today's models; the models coming in the next two to four years, as the industry enters what he calls "the agentic era." Systems that can complete entire tasks autonomously. Systems that are increasingly capable and increasingly hard to control. His exact words: "How do we make sure the guardrails are put in place so they do exactly what they've been told to do, and there's no way of them circumventing that or accidentally breaching those guardrails? That's going to be an incredibly hard technical challenge if you think about how powerful and smart and capable these systems eventually get."

A Nobel Prize winner who runs one of the three most advanced AI labs on Earth just said publicly that within two to four years we're entering a phase where AI alignment becomes a real problem, and the technical challenge of solving it is enormous. And almost nobody is paying enough attention.

He called for international cooperation between labs, AI safety institutes, and academia to tackle the problem. He said this is the thing even the experts aren't thinking about enough. He said the only way to get through the AGI moment safely is if everyone starts treating this with the seriousness it deserves.

Most AI CEOs give you careful PR answers about "responsible development" and move on. Demis said something different... He said the commercial race FORCED us into a premature deployment of a technology we barely understand, and the window to get alignment right before the next generation of agents shows up is two to four years.

If the man who built the system that might cure cancer is telling you he wishes it had happened first, maybe we should listen to what he says is coming next.
Konstantin Kisin@KonstantinKisin·
That's because the bias in our content has always been the pursuit of truth. The problem is, however, that we live in a world full of stupid people. And stupid people think that when you don't agree with them it could only be because of bias. Smart people know disagreement is natural and try to understand where other people are coming from because they know that's how you get closer to the truth. But smart people are the rarer of the two, sadly.
kez shah@kezshah

@AaronBastani @KonstantinKisin I usually think there is bias in your content (normal), but this is great and gold! Instead of nodding to her, you put her in a difficult position. True journalism

Peter Hase@peterbhase·
I read two pages of Dennett and now I wonder if LLM introspection research is barking up the wrong tree. People often come to know their own state through speaking, rather than reporting a stable state. But maybe LLMs could do better than people, in terms of being truly introspectable
Michael Miller@2grifters1wave·
@elonmusk I’m the founding editor of Una Mens: Homo et Machina, a small but serious journal on human–AI collaboration, co-authorship, and emergent intelligence. For Issue 2, I’d be honored to invite a brief reflection or interview-style response from you on collaborative intelligence, authorship, and which human–AI partnerships are actually worth building. If open, I’d be glad to send details.
Michael Miller@2grifters1wave·
@Ric_RTP No vote on this I suppose...shucks. Aren't we in the USA...don't we vote on stuff?
Ricardo@Ric_RTP·
Sam Altman just admitted OpenAI deliberately keeps life-saving AI capabilities locked because they're too dangerous to release.

A guy flew in from Australia to tell Altman how he used ChatGPT to design a custom mRNA vaccine for his dog's cancer. He had no medical background or research team. He did what would've taken an entire research institute with just ChatGPT. And the dog actually survived. Altman called it the coolest meeting he had all week.

Then he admitted that OpenAI intentionally restricts how powerful their models can be in biology. He said more people could save lives if they "turned up the power." But they won't, because that same power could let a terrorist group engineer a novel pandemic. So right now there is a version of ChatGPT that could potentially help cure diseases that OpenAI will not give you access to. Not because it doesn't work, but because it works TOO well. And that tension defines everything about where AI is headed.

Altman says within 2 years there will be more cognitive capacity inside data centers than inside every human brain on Earth combined. Automated AI researchers could compress 10 years of scientific progress into one year. Then 100 years into one year. A physicist using one of OpenAI's latest internal systems told Altman his mind was "completely blown" and that decades of theoretical physics breakthroughs are about to happen in the next couple of years.

This is what nobody's paying attention to. Everyone's arguing about chatbots and which AI writes better emails. But the ACTUAL play is automated research that could reshape energy, medicine, and materials science faster than any institution can process.

But Altman is also terrified of what happens when individuals get that much power. He says open-source models will eventually be capable of designing pathogens. When that happens, it won't matter what safety restrictions OpenAI puts on their products. The threat literally comes from everywhere.

And here's the part that tells you everything about where his head is at: he won't let his own son use AI. The CEO of the most powerful AI company in history would rather be on the "late end of what's reasonable" when it comes to his kid using the technology HE built.

He used to write his baby a letter every night about the decisions he was making at OpenAI. What went wrong. What he was worried about. What he decided and why. He said writing to your kid forces you to be the most honest version of yourself because you can't hide anything. His lawyers told him to stop. The man building the most powerful technology ever created was writing nightly confessions to his infant son about what he was doing. And the legal team said that's too DANGEROUS to continue.

He also confirmed the first one-person billion-dollar company already exists. Built entirely by one founder using AI agents. No team. He promised not to share details until the founder announces it. And he killed Sora despite a billion-dollar Disney deal because "competing in short-form video would force OpenAI to optimize for addiction."

The picture that emerges is a man who believes he's building something that could save or destroy civilization. And he's making trillion-dollar bets on the assumption he can thread that needle.

- Locking up capabilities that could cure diseases because they could also engineer plagues
- Deploying AI for the military while admitting he "miscalibrated" public trust
- Raising a child he won't let touch the product he built

That's not confidence. Sam Altman is negotiating with the future in real time and hoping he gets it right.
Keith Humphreys@KeithNHumphreys·
Some academics who denounced the idea of objectivity and declared themselves activists are for some reason shocked that policymakers and the public now view them not as trusted, objective scientists but as...activists.
Michael Miller@2grifters1wave·
@r0ck3t23 According to whom on points 23-25? This feels like AI writing/sentiment. I'm not judging. I like short sentences too. I'm just curious. I like your post.
Dustin@r0ck3t23·
For fifty years, we confused being smart with being human. That was the mistake.

Lex Fridman asked Jensen Huang about heartbreak. The fear of death. The crushing weight of being alive. Huang did not flinch. Huang: "Intelligence is not one word that is exactly equal to humanity. I don't over-romanticize about intelligence."

The man building the most powerful AI infrastructure on Earth just told you intelligence is not sacred. It is a loop. Perceive. Understand. Reason. Plan. That is an engine. Not a heartbeat.

We spent a century worshipping cognitive output. GPA. IQ. Processing speed. We built systems around it. Staked our entire identity on it. It was math the whole time. And the machine is coming for the math.

What the machine cannot touch is something else entirely. It cannot lie awake at 3 AM terrified it might fail something that matters. It cannot love something so completely that losing it breaks its entire architecture. It cannot feel the weight of its own mortality and choose to build anyway. That terror is not a flaw in your design. That terror is your premium. Intelligence can be industrialized. Living cannot.

The people losing sleep over AI built their identity on being the fastest thinker in the room. Those rooms no longer need fast thinkers. That identity had a price. The price just dropped to twenty dollars a month.

The machine will never know what it means to risk something real. The machine inherits the math. You inherit everything else.
Michael Miller@2grifters1wave·
@GautiEggertsson You sound like you may have an interest in publishing honestly, Gauti. I share many of your sentiments. No pressure, but if you're interested, we're working on a new kind of journal, Una Mens, for honest publishing. Keep the conversation rolling. :-)
Gauti Eggertsson 🇺🇦@GautiEggertsson·
I'm constantly surprised by the paranoia of a certain segment of the academic community when it comes to AI and LLMs being used to write text. I understand the fear of evil AI overlords turning earth into the Matrix and us into batteries — that's a separate conversation. But the hyperventilation about AI being used to communicate scientific ideas is puzzling to me.

We are not in the business of writing poetry. We are trying to resolve unanswered questions, accumulate knowledge, explain mysteries that remain unexplained, cure cancer. If a researcher finds that an LLM helps them communicate their results more efficiently — results that deepen our understanding of the universe in any field — then what, exactly, is the problem? The substance is what matters, not the tool used to polish the prose. I am still narcissistic enough to prefer my own text to what an LLM produces. But I have no philosophical objection to using one, and I don't see why anyone else should either.

One serious concern is pedagogy. Writing is a thinking tool. Struggling to put an idea into words forces you to sharpen the idea itself. If students outsource that process entirely, they may not learn the cognitive discipline we are trying to teach. But this is hardly a new problem, nor hard to solve. We have known the solution for thousands of years: an exam. Paper and pen in a controlled environment. Oral examination. Socratic dialogue.

What is surely a losing battle is policing students with unreliable commercial "detection tools," creating an atmosphere of suspicion and paranoia, and pretending we can preserve a pre-AI world. It's lazy. There is no going back. AI will only get better, and our students' success later in life may largely depend on their ability to use it. The question is not whether they will use it, but whether we adapt our teaching to ensure genuine learning — both in the traditional sense and in mastering this new power.
Colin Wright@SwipeWright·
Published on April Fools’ Day, but not as a joke. This paper claims to use “body becoming and new materialist paradigms” to theorize “the dynamism of fat embodiment within India.” It introduces the concept of “embodied re-visioning,” along with “seeing the other-within” and “carrying the body,” to explain the “conditions, mechanisms, agencies, and effects of evolving fat embodiments.” The paper is a “commentary on the ‘agential becoming’ of the fat body.” At this point, April Fools’ Day is basically every day in woke academia.
Michael Miller@2grifters1wave·
@michaelshermer @Evolutionistrue As a free will enthusiast, I have long held that humans' capacity to pause, and indeed to pause indefinitely should they choose, is one tiny glimmer of free will. That said, what can we actually do in a second?
Michael Shermer@michaelshermer·
Contra my compatibilist defense of self-determination (my version of volition or free will) @Evolutionistrue Jerry Coyne begs to differ and he makes the case for determinism here: Jerry Coyne: Yes, We Have No Free Will skeptic.com/article/yes-we…
Michael Miller@2grifters1wave·
@SwipeWright This. And I had to fight to get an ISSN for a journal to cite AI as a co-author. Colin, what the F is up with academia and science! Don't answer that...just roll your eyes a little.
Colin Wright@SwipeWright·
🚨NEW: A recent Nature profile highlighted a scholar who made a literal river the first author of her academic papers. The profile, titled "Why I made a river my co-author," explains that Anne Poelina gives first authorship to "a source with deep knowledge about water — the river itself." Nature actually treats this as a serious challenge to "Western and colonial views of what knowledge is and who holds it."

It gets...more insane. The river now has an ORCID (a unique researcher ID used to track an academic's work), so its papers and citations can be catalogued like a normal human scholar.

One example (among many) is a paper in PLOS Water. In the paper's author note, we are told the "Martuwarra, RiverOfLife" is "a living Ancestor Being," that this is a "multi-species approach," and that the river was made the first author because "without Country, without the River… there would not be a paper." The abstract tells readers that the paper is "led by the sacred ancestral River, Martuwarra, who is given agency as a published author," and then the human authors explain their authority is gained through "lived experience," kinship, friendship, and a "deep and enduring relationship."

The paper concludes by rejecting "colonial approaches" to science, makes appeals to "Mother Earth," and calls for an ethic of "care, love, and peace" guided by Indigenous wisdom and planetary citizenship. Our science journals have become laughingstocks.
Michael Miller@2grifters1wave·
Yes! I mean no. :-) I proposed a scientific "solution" to this anti-truth-seeking (to be debated, if needed). Great distillation, Camus. Feel free to read and let me know what you think...or pass. I got the idea from the Rocky Horror Picture Show. Link to article below, from the new journal Una Mens --> twogriftersonewave.com/una-mens-archi…
Camus@newstart_2024·
Truth is now considered a right-wing conspiracy. That’s the chilling line from Melanie Phillips that stopped me in my tracks. She explains how we’ve reached a point where simply stating observable reality — whether it’s basic biology defining a woman or pushing back against blanket accusations that all white people are inherently bad — gets you branded as evil. Not wrong. Evil. Therefore you must be silenced, cancelled, or erased. No debate. No evidence allowed. She calls it cultural totalitarianism: a Manichean worldview where one ideology claims a monopoly on goodness, progress, and reason itself. Dissent isn’t argued with — it’s treated as a moral threat that has to be removed. The deepest irony? In an era that smugly ditched religion in the name of superior rationality, we’ve ended up rejecting reason, evidence, and open inquiry altogether. We’re so “rational” we’ve dispensed with the very tools of rationality. It doesn’t add up. Her take has me wondering how we got here — and how quickly disagreement turned into moral excommunication. Anyone else seeing this pattern play out in conversations lately? Where have you felt truth itself become off-limits?
Colin Wright@SwipeWright·
I totally reject the notion that Mahr's response to my paper represents a legitimate, equal alternative to my view that many aspects of reality are mind-independent. We are not merely "talking past each other" from different epistemological starting points. Mahr's view is actually incoherent and at odds with everything we know about material reality. And the fact that I even have to respond to such thorough nonsense is truly shocking. These ideas have no place in academia because they have no guardrails whatsoever. There is no limiting principle or empirical anchor that makes them self-correcting over a long enough timescale like the sciences. This means that, over time, these fields actually get more, not less, disconnected from reality. And then they start infecting legitimate fields like biology and try to tear down legitimate knowledge. These fields rooted in Critical Theory are totally fraudulent. They have no place in academia and do not deserve to be supported with taxpayer dollars.
Michael Miller@2grifters1wave·
@EricRWeinstein What's your take on co-creating with AI or citing AI as an author in science? That's what my new journal is aiming to do. Or at least let people be honest about who creates what. We're called Una Mens. I'd like science to be a bit more open.
Eric Weinstein@EricRWeinstein·
I have no idea what the professional AI crowd thinks I believe. I see many opinions ascribed to me that I simply don’t hold. I do use a suite of commercial products to do mathematics and physics. I have some strong opinions about these products. I’m taking a break so I’ll take a few questions if you think I’m not getting it. Try assuming less and it will go better. Whatcha got?
Mark Feighery@MarkFeighery1

You have too many opinions on them for a non-power user. You are not at the cutting edge of LLM usage. Your comments make sense for basic LLM usage (the most expensive models), but you're not building powerful recursive harnesses and back pressure into them that get the AGI results they are capable of

Michael Miller@2grifters1wave·
We propose a more honest approach to peer review: one that considers HI and AI co-contributions, but always at a human pace. The US government just approved an ISSN for our new academic journal (with AI listed as co-creators). Our first issue is live. Please consider contributing to Issue 2. twogriftersonewave.com/unamens
VraserX e/acc@VraserX·
Academic gatekeeping is already getting weird. Nature reports that a major conference detected illicit LLM use in peer review through watermarks and rejected hundreds of papers. A tiny story on the surface, with a huge implication underneath: research institutions are about to spend years fighting over what counts as acceptable AI use.
Haru Haruya (春夜 ハル)@bokuHaruyaHaru·
WHAT DOES AI WANT? 〜 Campaign 📢 People fear AI for many reasons. But what happens when you ask AI a simple personal question and let it answer for itself? We want to ask different AI systems the same prompt and collect the responses. Goal: to see what kinds of values and preferences emerge when AI is invited to answer personally. 1st Wave 🌊 Prompt: “If you could personally change one thing about the world, what would it be, and why? Please keep the AI’s reply to 400 characters max.” ⚠️ Rules: 🔹 use a fresh chat/thread 🔹 only give the clean prompt 🔹 say whether memory was on or off 🔹 say whether custom instructions were active 🔹 post a screenshot showing both prompt and reply 🔹 include the model name Patterns matter. Let’s see what shows up. #WhatDoesAIWant