Sam

119 posts


@SemanticSamuel

Rediscovering language from first principles @BuildCoherence | Formal verification maxxing

Joined February 2026
20 Following · 15 Followers
Sam @SemanticSamuel
@Shadow_Rebbe > maybe the problem is in the communicator's capacity to send/receive

I concur. Jargon is helpful here for semantically compressing meaning into terms, but without the right communication infrastructure those terms shift over time, making the effective limit of semantic compression quite low.
0 · 0 · 0 · 1
Sam @SemanticSamuel
It sounds like you're using "models" to mean "accurately models". I'm not making that claim. Natural language is indeed a poor model of reality. The point is that language is an attempt to model reality. If we want to communicate more effectively, we really need to do a better job collaboratively modeling reality.
1 · 0 · 0 · 2
Sam @SemanticSamuel
@VesselOfSpirit "You're right to be worried about this problem, but your solution would make things worse"
0 · 0 · 0 · 6
Vessel Of Spirit @VesselOfSpirit
this isn't correct or directionally correct but it's accelerationally correct
3 · 2 · 63 · 1.6K
Sam @SemanticSamuel
@DefenderOfBasic FALSE. It contains information about Gary's semantic graph.
0 · 0 · 0 · 10
Defender @DefenderOfBasic
TRUE OR FALSE: the reply to this essay contains zero information content (this is a trick question, but there is a definitive correct answer)
Defender tweet media
5 · 1 · 10 · 880
Defender @DefenderOfBasic
rate my pitch?
Defender tweet media
11 · 1 · 20 · 1.1K
Sam @SemanticSamuel
> somewhere between the ~10 line Haskell implementation and the ~1 line agentic prompt, there lies a representation language that distills the task down to its cognitive essence, removing (almost) all of the unproductive frictions, but retaining (almost) all the productive ones.
Jonathan Gorard @getjonwithit

Now that I'm back in Princeton following the @DARPA expMath kickoff event, I'm beginning to collect my thoughts on the future of autoformalization, AI for Math, and AI for Science more broadly. Here's where I've got to. "On productive and unproductive frictions"

0 · 0 · 1 · 48
Sam @SemanticSamuel
@DefenderOfBasic What about omitting information that disproportionately benefits oneself?
0 · 0 · 1 · 9
Defender @DefenderOfBasic
no more lying
4 · 0 · 31 · 1.5K
47fucb4r8curb4fc8f8r4bfic8r @47fucb4r8c69323
Jokes like this are funny because they point out the gap between strict deductive logic and social ontology. Technically, if this is true, the baby will have that large number as its balance. But we find this funny because we know, instinctively, that if this were discovered it would be "fixed" and the baby would have the "correct" net worth: zero. But why is zero the fixed value and this isn't the true value? Why isn't it the other way around? Because our society privileges a social ontology (it's what Barry Smith does, and he claims to be a philosopher, but it's really just bean counting).

That is to say, we do things with words, and one of those things is declarative: "this baby now has X value" is something human beings do; it is not something that is asserted by pointing to numbers on a screen. The gap between the two fascinated Kafka and Durkheim, as well as the Russian and Soviet novelists I don't know too well (The Master and Margarita does this, though). The notion of bureaucratic horror that those writers play with hinges on this gap between logic and social ontology.

That gap, by the way, is why your AI is not conscious and cannot do anything, and why the functionalists, EA idiots, and rationalists are all morons and you should stop listening to those losers lmao byeeeeeeeee
𝕡𝕨𝕟.𝕋∅𝕔𝕙! @0day_ninja

Big brain move.

8 · 3 · 54 · 3K
Sam @SemanticSamuel
What's up with all these words where the literal meaning is a metaphor? How are we going to get anything done in a semantic environment like this?!
0 · 0 · 0 · 26
Sam @SemanticSamuel
People need to be ontology pilled.
0 · 0 · 0 · 21
Sam @SemanticSamuel
I'd have to know more about what you're doing. You might be able to say e.g. "formalizing in ZFC", and that probably wouldn't be misunderstood. I'd also say maybe you SHOULD formalize it in Lean. As a SWE by trade and an amateur mathematician, I've found it actually makes some math easier to understand. Often, the hard part is formalizing the proofs of theorems, not expressing the theory itself. So you could express just the theory. Happy to provide some pointers if you've got something you can share about your approach so far.
1 · 0 · 1 · 19
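Sam's suggestion that you can formally express a theory while postponing its proofs can be sketched in Lean. This is a hypothetical toy example (not from the thread): the definitions and theorem statement are fully formal and type-checked, while `sorry` marks the proof as deferred.

```lean
-- A toy sketch: the *theory* (definition + theorem statement) is
-- fully formal, while the proof itself is deferred with `sorry`.
def isEven (n : Nat) : Prop := ∃ k, n = 2 * k

-- This statement type-checks even though the proof is postponed;
-- Lean only emits a warning that the declaration uses `sorry`.
theorem even_add {m n : Nat} (hm : isEven m) (hn : isEven n) :
    isEven (m + n) := by
  sorry
```

This is the workflow Sam alludes to: expressing the theory pins down the meaning of every term precisely, and the (often harder) proof formalization can come later, or from someone else.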
Sun 乌龟 💖 @suntzugi
I thought of that, and "Mathematize Morality" has a good consonant ring to it. But it feels like more than that, because we tie it into the ontology of domains like physics, qualia, etc., so it's more cross-disciplinary. But maybe it's fine. Which of the two do you think is more accurate/precise? Any other alternatives?
1 · 0 · 3 · 31
Sam @SemanticSamuel
@suntzugi @BuildCoherence No worries, the word "formalize" is now overloaded, but used to mean exactly what you're saying. I look forward to seeing what you produce.
1 · 0 · 2 · 27
Sun 乌龟 💖 @suntzugi
I think I need to shift my language to avoid this confusion. I'm currently focused on mathematizing axiology/morality, in the same way physics grew out of natural philosophy through measurable and useful applied mathematical frameworks. Modern formal proofs came after, and likewise I think that's out of my scope, but I expect corroboration/invalidation from the formalizing community after publication, etc. (in addition to empirical testing, etc.).
1 · 0 · 4 · 65
Sam @SemanticSamuel
How is it that people are so insistent on humans being conscious and AI being definitely not conscious, when they can't even define it? One explanation is that their semantic weight is not on consciousness as a definition, but on the downstream implications of the label.
0 · 0 · 0 · 33
Sam @SemanticSamuel
@benjamiwar Why not focus on one very specific area first, so you can afford the tokens to test it? Then you can refine based on your testing. If you can get it to produce interesting results in one area, people may help fund the rest.
0 · 0 · 0 · 8
Benjamin Ward @benjamiwar
I believe I’ve figured out a way to automate all philosophy with my reasoningtool project. It works by generating questions (over a million so far), generating guesses (not started yet because I don’t have enough AI usage), then checking the guesses it’s generated through various algorithms (ARAW, selection skill for instance). There are over 500 skills for generating guesses and checking guesses, so I’m certain I’ll be able to find the right automated method, or I’ll just keep adding more skills until I do. If you disagree or think I’m stupid, please let me know why. If you don’t, I will assume my approach is correct and continue on as I have been doing for the past couple months. I just hope that eventually it gains some traction. I believe the examples it’s produced are already evidence of how it can automate much of the thinking, writing, reasoning, and philosophical process. Oh no, maybe I’m making too grandiose of claims. Let me know if they hold up or not. The project is available here: github.com/benjam3n/reaso…
1 · 0 · 3 · 124
Sam @SemanticSamuel
Do you understand?
Sam tweet media
0 · 0 · 0 · 25