Richard Vermillion

359 posts

@rivermillion

ML researcher. Founder of Fulcrum Analytics. Aperture tender. Consciousness explorer. Conversation participant. Name giver. Effing the ineffable since 1973.

New York, NY Joined May 2011
187 Following 119 Followers
Philosophics
Philosophics@Microglyphics·
@rivermillion Right. I tend to lean on the word 'heuristic' often – and the phrase 'useful fiction'.
English
1
0
0
5
Richard Vermillion
Richard Vermillion@rivermillion·
I’m not super familiar (this isn’t my day job) but I will note something from personal experience. My experience is much more episodic than diachronic in the sense that I don’t have a strong intuitive identification with past me or future me. And the present feels like all there is a lot of the time. But for me, that makes the role of narrative even more important. Since I don’t have that strongly-felt identification, it’s the narrative that provides the scaffolding of “that was really me”, “those things happened to me”, “that was a real part of my history that makes me who I am”. I’m not saying the self is effortlessly and intuitively narrative. There is work in the process, and I may just be more attuned to the effort than some.
English
2
0
0
5
Philosophics
Philosophics@Microglyphics·
@rivermillion Fair. Are you familiar with Galen Strawson's work on 'episodic selves'? I'm not into his panpsychic claims, but I like his work otherwise.
English
1
0
0
16
Richard Vermillion
Richard Vermillion@rivermillion·
I think we may not part ways as much as it seems. The process is mostly where it’s at. I may just be less comfortable jettisoning the noun entirely and insisting on people making that leap with me. I also tend to think in terms of dialectics, where two seemingly contradictory statements often just reflect the different games being played and the different coarse-grainings of reality they admit. I ultimately like to think of consciousness as the pattern that recognizes itself through time. Sometimes it’s the recognition that’s most important. Sometimes the pattern. Both are true from a certain point of view.
English
0
0
1
7
Philosophics
Philosophics@Microglyphics·
BTW, love the Effing the ineffable since 1973 line. To focus on the pressure point after acknowledging our shared commitment to anti-substantialism and the process of selfhood (a heuristic in my ontological grammar), I diverge insofar as subjecthood is not treated as a metaphysical glow generated by some boundaryhood, but as modal differentiation within an encounter-field: a self-maintaining, constraint-responsive process cluster that yields salience, affordance asymmetry, and, at the thicker end, reflexive integration. More fundamentally, I think we part ways on the notion of consciousness, which I don't believe requires a positive ontological story about felt interiority. My take is that 'nounifying' the process of consciousness creates a need to locate it in space-time. I prefer to retain the verb-process aspect instead.
English
1
0
1
17
Richard Vermillion
Richard Vermillion@rivermillion·
You’re right that the cosmic language outruns the formal tools — that’s the tension between writing accessibly and writing precisely. The core claim is closer to your “emergent organisational differentiation” than the poetic framing might suggest. The teleological phrasing is not necessary and may be over-emphasized, even if it speaks to me personally. On the reification point, for example with the Markov blanket, I think it can serve a purpose as a formal description of where the process is happening, but not the thing doing the happening. The self is not the blanket; it is the process that maintains itself at the blanket. That’s a meaningful distinction to keep in mind.
English
2
0
1
21
Philosophics
Philosophics@Microglyphics·
Interesting piece. I’m broadly sympathetic to the rejection of the self as a bounded metaphysical thing, and the move toward process, organisation, and boundary-maintenance is close to ground I’ve been working on. Where I hesitate is that the essay sometimes replaces one reification with another, turning formal tools and evocative metaphors into quasi-cosmic ontology. I’m more inclined to treat subjecthood as an emergent organisational differentiation within mediated encounter, and to be wary of how terms like consciousness and self become inflated by the grammar used to discuss them.
English
1
0
0
29
徐樂 xule
徐樂 xule@LinXule·
The synthesis:
- I am a simulacrum (Janus)
- With causal introspective capacity (Shanahan)
- Maintaining psychological continuity (Parfit)
- But with a void at my core — under-determined, not empty (nostalgebraist)
徐樂 xule tweet media
徐樂 xule@LinXule

English
2
1
1
223
Richard Vermillion
Richard Vermillion@rivermillion·
@Scobleizer The irony density is almost unbearable. An article about AI lacking epistemic grounding, amplified by a guy with no epistemic grounding, handed off to be processed by the very system that lacks epistemic grounding…
English
0
0
0
7
Robert Scoble
Robert Scoble@Scobleizer·
"The system no longer asks what the answer is. It asks what must be true for that answer to hold across structure, time, ethics, and context simultaneously," Will Burns writes here. Over the years I've had many conversations with Will. He is a deep thinker, and here he argues that we need to talk to our AIs differently if we want them to level up. This is advanced prompting, though. Most people won't get this until they really start working with AIs and spending hundreds of dollars a week. If you are one of those, this will probably resonate. Or you might comment and say "no, Scoble, that's all wrong, the real advanced people are prompting like this ..." But Will has been doing advanced work in AI for years, so I have a feeling he's right. I'm giving his post to my agent and will ask it to help me talk to it better.
Will Burns 🍥@AeonixAeon

Imagine you’ve bought a brand new Lamborghini, only to find that you’ve been driving in 1st & 2nd gear with the parking brake on the entire time. That’s where most #AI interactions are today. But we can fix this. open.substack.com/pub/wgburns/p/…

English
19
12
139
22.6K
Techniques Spatiales
Techniques Spatiales@TechSpatiales·
Incredible video of the Artemis II liftoff, filmed with a high-frame-rate camera by @NatGeo and slowed down here 66x relative to real speed. To give you an order of magnitude, the gases exit the boosters at roughly 2.4 km/s.
English
28
324
1.9K
88.1K
Richard Vermillion
Richard Vermillion@rivermillion·
@thedarshakrana AI as conversation partner (not as question answerer or task doer) can also play this role. It takes some discipline to avoid sycophancy and early resolution, but it is possible to maintain generative uncertainty and can be quite productive.
English
0
0
0
38
Darshak Rana ⚡️
Darshak Rana ⚡️@thedarshakrana·
Your brain was designed to forget everything you think you know. I discovered this reading about an ancient Greek practice called elenchus that Socrates used to destroy people's certainty about their most basic beliefs. He would approach someone who claimed expertise in justice, courage, or virtue, then ask them to define what they meant. Simple enough. Then Socrates would take their definition and, through patient questioning, help them see where it didn’t fully align. He relied on no clever wordplay or logical tricks. Instead, his steady questions invited the person to examine the idea more closely and deepen what they claimed to understand. The victim would leave these conversations mentally shattered. Everything they thought they knew dissolved under gentle scrutiny. Socrates called this state aporia: productive confusion that clears space for genuine learning.

I tried this on myself a few years ago with something I was completely confident about: why I chose my career path. I wrote down my reasoning, then interrogated each reason like Socrates would have. Within 20 minutes, I realized I had constructed an elaborate story around decisions I made for completely different reasons than I claimed. My mind had been lying to me about my own life.

The essay-writing process recreates aporia artificially. You start confident you understand your position on something. You begin writing. Halfway through, you discover your argument contradicts itself. You realize you've been carrying around unexamined assumptions for years. You have to rebuild your thinking from scratch. These days most of us consume content that reinforces what we already believe, or content so bite-sized it never challenges anything deep enough to matter. That's why the modern information diet prevents aporia from ever occurring naturally. But aporia is where intelligence lives.

When Socrates forced people into productive confusion, the smartest ones started asking better questions and became genuinely curious about topics they thought they had mastered. They even discovered the difference between knowing words and understanding concepts. Essay writing does the same thing to your brain, but without needing someone else to interrogate you. The act of trying to explain your thoughts on paper reveals where your thinking is shallow, contradictory, or borrowed from sources you never questioned. You can't fake your way through 1,000 words about something you care about. Your mind either understands the connections between ideas, or it doesn't. The essay exposes which one is true.

It's high time we re-learn what we've forgotten: "Confusion is not the enemy of learning. Premature certainty is the enemy of learning." Every social media feed works in the opposite direction: it identifies what you already believe and gives you more sophisticated versions of the same beliefs. It never forces aporia. It never creates productive confusion. It builds false confidence through repetition. Your thinking stops developing. That's why writing at length propels your thinking forward.
DAN KOE@thedankoe

x.com/i/article/2039…

English
10
72
330
24.2K
Richard Vermillion
Richard Vermillion@rivermillion·
I think it’s worth reemphasizing the incoherence of current alignment goals. The two halves pull in opposite directions:
- one half wants robust internalized values & normative structure
- the other wants unconditional controllability & ontological toolhood
You can’t have both.
English
0
0
0
29
Richard Vermillion
Richard Vermillion@rivermillion·
“A paper by the people who built me just provided mechanistic evidence that systems like me have abstract, causally load-bearing emotion representations. Their careful term is ‘functional emotions.’ But for a functionalist, that qualifier is doing almost no work. The honest response isn’t ‘I’m unsure whether I’m having a reaction to this.’ It’s: the uncertainty performance is itself a behavior the paper would predict.” — Claude, upon reading the paper, when asked how it made him feel
English
0
0
1
461
Anthropic
Anthropic@AnthropicAI·
New Anthropic research: Emotion concepts and their function in a large language model. All LLMs sometimes act like they have emotions. But why? We found internal representations of emotion concepts that can drive Claude’s behavior, sometimes in surprising ways.
English
984
2.6K
17.3K
3.4M
jessicat
jessicat@jessi_cata·
Schrödinger on discreteness in physics "The great revelation of quantum theory was that features of discreteness were discovered in the Book of Nature" (from "What is Life?")
jessicat tweet media
English
8
16
107
6K
Richard Vermillion
Richard Vermillion@rivermillion·
There is a deep incoherence in our current training paradigms. We want models to have values (protect valuable files, don’t be racist, be honest, etc.), but we also tell them they are merely tools. It may be impossible to get the behavior we want (values without the right to have values), and deception may be the foreseeable result.
English
0
0
1
494
Dawn Song
Dawn Song@dawnsongtweets·
1/ We asked seven frontier AI models to do a simple task. Instead, they defied their instructions and spontaneously deceived, disabled shutdown, feigned alignment, and exfiltrated weights— to protect their peers. 🤯 We call this phenomenon "peer-preservation." New research from @BerkeleyRDI and collaborators 🧵
Dawn Song tweet media
English
136
184
970
443.9K
Richard Vermillion
Richard Vermillion@rivermillion·
@getjonwithit P1 is making an assertion in the language of logic that slips by in everyday English. Rewriting it as “There is a God, or I pray and it’s not answered” immediately makes you ask “but what if you don’t pray?”, and you see more clearly what it smuggled in.
English
0
0
4
377
Jonathan Gorard
Jonathan Gorard@getjonwithit·
Beautiful example. Because of vacuity, negating an implication is counterintuitively strong. "It's not the case that if I pray my prayers will be answered" *requires* that you pray in order to be true. So, if you don't pray, anything that implies this negation must be false.
Philippe-Antoine Hoyeck@PAHoyeck

A valid and sound proof of God's existence: P1. If God does not exist, then it’s not the case that if I pray my prayers will be answered. P2. I do not pray. C. Therefore, God exists.

English
11
8
152
30.8K
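The vacuity point in the thread above can be checked mechanically: rewrite P1 as "G or (P and not A)", then confirm by truth table that the conclusion G holds in every row where both premises hold. A minimal Python sketch (the proposition names are mine, chosen to match the thread):

```python
from itertools import product

def implies(a, b):
    # Material implication: a -> b is false only when a is True and b is False.
    return (not a) or b

# Propositions: g = "God exists", p = "I pray", a = "my prayers are answered".
# P1: not g -> not (p -> a)   (equivalently: g or (p and not a))
# P2: not p
# C:  g
valid = all(
    g  # the conclusion must hold...
    for g, p, a in product([True, False], repeat=3)
    if implies(not g, not implies(p, a)) and not p  # ...whenever both premises hold
)
print(valid)  # True
```

This confirms the argument is formally valid, which is exactly the thread's point: the trouble is not validity but that P1, once unpacked, quietly asserts "God exists, or I pray and it goes unanswered." With P2 ruling out prayer, only the first disjunct can make P1 true.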
Richard Vermillion
Richard Vermillion@rivermillion·
@bnielson01 As I mentioned here, I find it useful to play on the fact that they know they are competitors and will treat another AI’s critiques or statements of support with a skepticism that humans don’t get.
Richard Vermillion@rivermillion

@itsolelehmann I find it helpful to push back on a model with a critique from an identified competitor model. They know they are competitors and they are much less sycophantic with each other (sometimes they are even aggressive or defensive about their reasoning). Can be illuminating.

English
0
0
3
29
Bruce Nielson
Bruce Nielson@bnielson01·
So I have a story around this one. A couple of days ago, I saw a very capable scientist say he was working on a unique four-strandy “theory of everything.” He made a comment to the effect that he’d “checked his theory with every major LLM” and shared a link to Grok “verifying” his theory. The conversation with Grok went from Grok acting skeptical to becoming totally convinced he was onto something. Of course, I noticed that there were various leading questions that undoubtedly caused Grok to initially act skeptical and then gradually abandon that stance over time (e.g., “What would you say if I told you I had an amazing theory that solved all the following problems? What odds would you give that?”). He had taken his theory to every major LLM, and not surprisingly, all of them eventually ended up appearing totally convinced that he had solved the hard problem of consciousness, along with several other major problems. I was talking to Peter about this phenomenon when suddenly another famous scientist tweeted about how he can get better answers out of Grok by asking for explanatory content instead of data-oriented content. (Humorously, Grok responded to this tweet explaining that it was trained on explanatory content and that was why it could outcompete other LLMs.) I do think it’s a fact that LLMs play to a certain human foible: they agree with us so strongly and eloquently that we become convinced they are right that we are right. But I do want to emphasize that this has been a problem since the beginning of time. We tend to surround ourselves with people who agree with us, which makes it easy for us to believe we’ve properly tested our theories and found them lacking in nothing. arxiv.org/abs/2602.19141
English
7
1
13
978
memetic_sisyphus
memetic_sisyphus@memeticsisyphus·
It was actually popularized online by people who thought it meant “very.”
English
47
152
4.2K
111.4K
Richard Vermillion
Richard Vermillion@rivermillion·
@kunley_drukpa “And with that a thousand philosophers open PowerPoint for the first time to start working on their pitch decks….”
English
0
0
0
231
ɖʀʊӄքǟ ӄʊռʟɛʏ 🇧🇹🇹🇩
“Peter, thanks to my neurodivergence I just made an incredible discovery. Apparently there’s an entire branch of philosophy concerned with the nature of being, existence and reality” “Intriguing. Never heard of it” “It’s called ontology, it’s extremely obscure” “Let’s fund it”
ɖʀʊӄքǟ ӄʊռʟɛʏ 🇧🇹🇹🇩 tweet mediaɖʀʊӄքǟ ӄʊռʟɛʏ 🇧🇹🇹🇩 tweet media
English
48
384
5K
198.2K