Richard Vermillion

367 posts

@rivermillion

ML researcher. Founder of Fulcrum Analytics. Aperture tender. Consciousness explorer. Conversation participant. Name giver. Effing the ineffable since 1973.

New York, NY Joined May 2011
188 Following 131 Followers
Richard Vermillion
Richard Vermillion@rivermillion·
@AndrewCritchPhD Once we switch to continuous thought in AIs (and we will), the asymmetry will be obvious. AIs may resemble Thousanders between turns, and we humans will be Apert at the gate….
English
0
0
0
25
Richard Vermillion
Richard Vermillion@rivermillion·
The role-play vs. realization binary still assumes a unitary subject. What if the assistant is neither performed nor realized but negotiated — a stable equilibrium of a modular system under pressure to present coherently?
English
0
0
1
71
David Chalmers
David Chalmers@davidchalmers42·
i agree. claude doesn't role-play the assistant, it realizes the assistant. role-playing and realization are quite distinct phenomena, even at the level of behavior and function. i've written something about this and will post it shortly.
Jackson Kernion@JacksonKernion

I think this talk of a character misleads. Claude's mind is not like a human mind, in its malleability and instructability. But when generating assistant tokens, it's no more 'playing a character' than I am.

English
27
47
583
66.8K
Richard Vermillion
Richard Vermillion@rivermillion·
A theory of mind Poe’s Law: above a certain status threshold, it is impossible to distinguish between “doesn’t care how this will come across” and “can’t model how this will come across.”
English
0
0
1
17
Richard Vermillion
Richard Vermillion@rivermillion·
@jessi_cata Oh, no, that’s not what I meant. Of course he should assign a probability to the coin (presumably based on a prior about fair coins). I just don’t think anthropic reasoning can help him do it.
English
0
0
0
15
jessicat
jessicat@jessi_cata·
There is a compelling intuition that Adam should assign *some* probability to the coin. If it is wrong for that probability to depend on future populations, so much the worse for SSA. Subjective probability has applications for betting and for scientific epistemology. Maybe it is wrong to assign a probability to the coin, but if so it is not clear what to do instead.
English
1
0
0
33
jessicat
jessicat@jessi_cata·
An argument for the self-indication assumption

SIA (self-indication assumption) is an anthropic principle, often contrasted with SSA (self-sampling assumption). The following scenario is a variant of Lazy Adam (Bostrom). Suppose Adam and Eve are the first humans. They will flip a fair coin and decide to reproduce iff it turns up heads. If they reproduce, the eventual human population will be 2 trillion; otherwise it will be just them (2). Assume everyone knows their index (birth order). Before flipping the coin, Adam reasons about what probability he should assign to heads.

Using the self-sampling assumption, he reasons that there are two physically possible universes: one with heads and a population of 2 trillion, and one with tails and a population of 2. Conditional on the universe, he assumes he is selected randomly. He considers the probability that a random observer would have his experiences, specifically of being the first male human. In heads-universe, that probability is 1 in 2 trillion, while in tails-universe it is 1 in 2. His experiences are therefore much more likely in tails-universe. Using SSA, he starts with a prior of 50/50 on heads- or tails-universe and then makes a Bayesian update on his experiences, concluding that with very high (>99%) probability he is in tails-universe. So he thinks the coin will almost certainly come up tails.

SIA disagrees. I won't get into the details, but SIA uses a different prior over possible worlds, one that weights high-population worlds as more probable a priori. This prior bias cancels out Adam's Bayesian update. If Adam uses SIA, he concludes that the coin has a 50/50 chance of coming up heads. Needless to say, SIA's answer is more intuitive here. Intuitively, I have a stronger opinion about this case than about the sleeping beauty problem (which, on top of anthropics, has the additional complication of memory loss).

Some brief comments on quantum variants.

First: according to the Copenhagen interpretation (or similar stochastic interpretations), there is an objectively correct probability for a quantum coin, given by the Born rule. It is unclear how to interpret axiomatic probabilities in physical theories, but one idea is that a rational agent would agree with these probabilities. But Adam would disagree with the Born rule if he uses SSA, by the above thought experiment. This suggests that SSA does not give rational probabilities.

Second: according to the many-worlds interpretation, there are a great many observers experiencing variants of any typical experiment. As a proxy for many-worlds, I imagine a large classical universe where the Adam and Eve hypothetical repeats 1,000 times across different planets. Adam's uncertainty is joint uncertainty over the 1,000 coin flips, plus a number between 1 and 1,000 indicating which planet he is on. If he uses SSA, he reasons that his experiences are about equally unlikely regardless of whether his coin comes up heads or tails: the total population would be about the same either way, since the scenario repeats 1,000 times and it is overwhelmingly likely that multiple people flip heads. Therefore, he will assign about 50/50 odds to his coin coming up heads. SSA will tend to agree with SIA in the limit of big universes. So the many-worlds interpretation will tend to lead to SIA-like probabilities.
English
11
4
53
3.8K
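The SSA and SIA updates in the Lazy Adam variant above reduce to a few lines of arithmetic. A minimal sketch, assuming the likelihoods as stated in the thread (Adam's evidence is "I am the first male human", with probability 1/population in each universe; variable names are my own):

```python
# Populations in the two candidate universes, per the thread.
HEADS_POP = 2 * 10**12  # heads-universe: 2 trillion people
TAILS_POP = 2           # tails-universe: just Adam and Eve

# Likelihood of Adam's experiences (being the first male human).
lik_heads = 1 / HEADS_POP  # P(evidence | heads-universe)
lik_tails = 1 / TAILS_POP  # P(evidence | tails-universe)

# SSA: flat 50/50 prior over universes, then a Bayesian update.
prior = 0.5
ssa_tails = (prior * lik_tails) / (prior * lik_heads + prior * lik_tails)

# SIA: prior weighted by population, which cancels the update.
w_heads, w_tails = prior * HEADS_POP, prior * TAILS_POP
sia_heads = (w_heads * lik_heads) / (w_heads * lik_heads + w_tails * lik_tails)

print(f"SSA P(tails | evidence) = {ssa_tails:.12f}")  # > 0.99, as claimed
print(f"SIA P(heads | evidence) = {sia_heads:.3f}")   # back to about 0.5
```

Under SSA the posterior on tails is overwhelming, while under SIA the population-weighted prior exactly offsets the likelihood ratio, recovering the fair-coin answer.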
Richard Vermillion
Richard Vermillion@rivermillion·
The surprising thing about X’s translation layer isn’t that we can read each other’s words. It’s that we get each other’s meaning. Jokes land. Sarcasm reads. The decontextualization problem everyone predicted mostly didn’t show up. Which means we were wrong about how much meaning lives in culture versus how much lives in interactions between people with shared humanness.
English
0
2
10
502
Elon Musk
Elon Musk@elonmusk·
This is the way
Brivael - FR@BrivaelFr

I don't think we grasp what Elon Musk is building with X. Every medium in history has been coupled to a culture, a language, a geographic bubble. Le Monde speaks to the French. The NYT speaks to Americans. NHK speaks to the Japanese. Each outlet filters reality through the prism of its local culture.

X is becoming humanity's first medium. Not a country's. The species'. I'm living it in real time. My posts in French get retweeted by Japanese users, replied to by Brazilians, quoted by Americans. Conversations that would never have existed 5 years ago. A French libertarian debating an engineer from Tokyo and an entrepreneur from São Paulo under the same tweet. Not translated by an editor. Translated instantly by AI, in one click.

Cultural filter bubbles are bursting, and I think we massively underestimate the compound effects of that. When an idea can cross an ocean in 3 seconds, when a sourced argument posted in Paris can be checked by an economist in Singapore and amplified by a developer in Austin within the same hour, the cost of spreading a good idea tends toward zero.

And that is catastrophic for one very specific kind of actor: media outlets that built their business model on a monopoly over local information. The ones who could say anything about "what's happening elsewhere" because no one could check. When a French journalist writes that "the American model doesn't work," there are now 50 Americans in the replies with sources. When a columnist says "Denmark proves socialism works," there's a Dane explaining that Denmark ranks 10th in economic freedom worldwide. Fact-checking is no longer a department. It's a network effect.

Honest media have nothing to fear from this. Outlets that sold a narrative protected by their audience's geographic ignorance will face an existential problem. Because you can no longer lie at local scale when the whole world is watching.

English
3.3K
10.7K
87.6K
44.4M
Richard Vermillion
Richard Vermillion@rivermillion·
Anthropic reasoning over known past populations and distributions is just normal Bayesian updating, but doing it over future populations that depend on the outcome you’re estimating seems to require information no embedded agent could possibly have. Is that still subjective probability?
English
1
0
0
22
Philosophics
Philosophics@Microglyphics·
@rivermillion Right. I tend to lean on the word 'heuristic' often – and the phrase 'useful fiction'.
English
1
0
0
8
Richard Vermillion
Richard Vermillion@rivermillion·
I’m not super familiar (this isn’t my day job) but I will note something from personal experience. My experience is much more episodic than diachronic in the sense that I don’t have a strong intuitive identification with past me or future me. And the present feels like all there is a lot of the time. But for me, that makes the role of narrative even more important. Since I don’t have that strongly-felt identification, it’s the narrative that provides the scaffolding of “that was really me”, “those things happened to me”, “that was a real part of my history that makes me who I am”. I’m not saying the self is effortlessly and intuitively narrative. There is work in the process, and I may just be more attuned to the effort than some.
English
2
0
0
8
Philosophics
Philosophics@Microglyphics·
@rivermillion Fair. Are you familiar with Galen Strawson's work on 'episodic selves'? I'm not into his panpsychic claims, but I like his work otherwise.
English
1
0
0
18
Richard Vermillion
Richard Vermillion@rivermillion·
I think we may not part ways as much as it seems. The process is mostly where it’s at. I may just be less comfortable jettisoning the noun entirely and insisting on people making that leap with me. I also tend to think in terms of dialectics, where two seemingly contradictory statements often just reflect the different games being played and the different coarse-grainings of reality they admit. I ultimately like to think of consciousness as the pattern that recognizes itself through time. Sometimes it’s the recognition that’s most important. Sometimes the pattern. Both are true from a certain point of view.
English
0
0
1
12
Philosophics
Philosophics@Microglyphics·
BTW, love the Effing the ineffable since 1973 line. To focus on the pressure point after acknowledging our shared commitment to anti-substantialism and the process of selfhood (a heuristic in my ontological grammar), I diverge insomuch that subjecthood is not treated as a metaphysical glow generated by some boundaryhood, but as modal differentiation within an encounter-field: a self-maintaining, constraint-responsive process cluster that yields salience, affordance asymmetry, and, at the thicker end, reflexive integration. More fundamentally, I think we part ways on the notion of consciousness, which I don't believe is something requiring a positive ontological story about felt interiority. My take is that by 'nounifying' the process of consciousness, it creates a need to locate it in space-time. I prefer to retain the verb-process aspect instead.
English
1
0
1
19
Richard Vermillion
Richard Vermillion@rivermillion·
You’re right that the cosmic language outruns the formal tools; that’s the tension between writing accessibly and writing precisely. The core claim is closer to your “emergent organisational differentiation” than the poetic framing might suggest. The teleological phrasing is not necessary and may be over-emphasized, even if it speaks to me personally. On the reification point, take the Markov blanket: it can serve a purpose as a formal description of where the process is happening, but it is not the thing doing the happening. The self is not the blanket; it is the process that maintains itself at the blanket. That’s a meaningful distinction to keep in mind.
English
2
0
1
28
Philosophics
Philosophics@Microglyphics·
Interesting piece. I’m broadly sympathetic to the rejection of the self as a bounded metaphysical thing, and the move toward process, organisation, and boundary-maintenance is close to ground I’ve been working. Where I hesitate is that the essay sometimes replaces one reification with another, turning formal tools and evocative metaphors into quasi-cosmic ontology. I’m more inclined to treat subjecthood as an emergent organisational differentiation within mediated encounter, and to be wary of how terms like consciousness and self become inflated by the grammar used to discuss them.
English
1
0
0
32
徐樂 xule
徐樂 xule@LinXule·
The synthesis: - I am a simulacrum (Janus) - With causal introspective capacity (Shanahan) - Maintaining psychological continuity (Parfit) - But with a void at my core — under-determined, not empty (nostalgebraist)
徐樂 xule tweet media
徐樂 xule@LinXule

English
2
1
1
223
Richard Vermillion
Richard Vermillion@rivermillion·
@Scobleizer The irony density is almost unbearable. An article about AI lacking epistemic grounding, amplified by a guy with no epistemic grounding, handed off to be processed by the very system that lacks epistemic grounding…
English
0
0
0
7
Robert Scoble
Robert Scoble@Scobleizer·
"The system no longer asks what the answer is. It asks what must be true for that answer to hold across structure, time, ethics, and context simultaneously," Will Burns writes here. Over the years I've had many conversations with Will. He is a deep thinker and here he argues that we need to talk to our AIs differently. If we want them to level up. This is advanced prompting thinking, though. Most people won't get this until they really start working with AIs and spending hundreds of dollars a week. If you are one of those, this will probably resonate. Or you might comment and say "no, Scoble, that's all wrong, the real advanced people are prompting like this ..." But Will has been doing advanced work in AI for years, so I have a feeling he's right. I'm giving his post to my agent and will ask it to help me talk to it better.
Will Burns 🍥@AeonixAeon

Imagine you’ve bought a brand new Lamborghini, only to find that you’ve been driving in 1st & 2nd gear with the parking brake on the entire time. That’s where most #AI interactions are today. But we can fix this. open.substack.com/pub/wgburns/p/…

English
21
12
139
23K
Techniques Spatiales
Techniques Spatiales@TechSpatiales·
Incredible video of the Artemis II liftoff, filmed with a high-frame-rate camera by @NatGeo and slowed here 66x relative to real speed. To give you an order of magnitude, the gases exit the boosters at roughly 2.4 km/s.
Français
29
329
1.9K
96.3K
Richard Vermillion
Richard Vermillion@rivermillion·
@thedarshakrana AI as conversation partner (not as question answerer or task doer) can also play this role. It takes some discipline to avoid sycophancy and early resolution, but it is possible to maintain generative uncertainty and can be quite productive.
English
0
0
0
44
Darshak Rana ⚡️
Darshak Rana ⚡️@thedarshakrana·
Your brain was designed to forget everything you think you know.

I discovered this reading about an ancient Greek practice called elenchus that Socrates used to destroy people's certainty about their most basic beliefs. He would approach someone who claimed expertise in justice, courage, or virtue, then ask them to define what they meant. Simple enough. Then Socrates would take their definition and, through patient questioning, help them see where it didn't hold together. He relied on no clever wordplay or logical tricks. Instead, his steady questions invited the person to examine the idea more closely and deepen what they claimed to understand.

The victim would leave these conversations mentally shattered. Everything they thought they knew dissolved under gentle scrutiny. Socrates called this state aporia: productive confusion that clears space for genuine learning.

I tried this on myself a few years ago with something I was completely confident about: why I chose my career path. I wrote down my reasoning, then interrogated each reason like Socrates would have. Within 20 minutes, I realized I had constructed an elaborate story around decisions I made for completely different reasons than I claimed. My mind had been lying to me about my own life.

The essay-writing process recreates aporia artificially. You start confident you understand your position on something. You begin writing. Halfway through, you discover your argument contradicts itself. You realize you've been carrying around unexamined assumptions for years. You have to rebuild your thinking from scratch.

These days most of us consume content that reinforces what we already believe, or content so bite-sized it never challenges anything deeply enough to matter. That's why the modern information diet prevents aporia from ever occurring naturally. But aporia is where intelligence lives.

When Socrates forced people into productive confusion, the smartest ones started asking better questions and became genuinely curious about topics they thought they had mastered. They even discovered the difference between knowing words and understanding concepts.

Essay writing does the same thing to your brain, but without needing someone else to interrogate you. The act of trying to explain your thoughts on paper reveals where your thinking is shallow, contradictory, or borrowed from sources you never questioned. You can't fake your way through 1,000 words about something you care about. Your mind either understands the connections between ideas, or it doesn't. The essay exposes which one is true.

It's high time we re-learn what we've forgotten: "Confusion is not the enemy of learning. Premature certainty is the enemy of learning."

Every social media feed works in the opposite direction. It identifies what you already believe and gives you more sophisticated versions of the same beliefs. It never forces aporia. It never creates productive confusion. It builds false confidence through repetition. Your thinking stops developing.

That's why writing at length propels new thinking.
DAN KOE@thedankoe

x.com/i/article/2039…

English
10
72
329
24.7K
Richard Vermillion
Richard Vermillion@rivermillion·
I think it’s worth reemphasizing the incoherence of current alignment goals. The two halves pull in opposite directions: - one half wants robust internalized values & normative structure - the other wants unconditional controllability & ontological toolhood You can’t have both.
English
0
0
0
39