Pinned Tweet
Sam
119 posts

Sam
@SemanticSamuel
Rediscovering language from first principles @BuildCoherence | Formal verification maxxing
Joined February 2026
20 Following · 15 Followers

@Shadow_Rebbe > maybe the problem is in the communicator's capacity to send/receive
I concur. Jargon helps here by semantically compressing concepts into terms, but without the right communication infrastructure, those terms drift in meaning over time, making the effective limit of semantic compression quite low.

It sounds like you're using "models" to mean "accurately models". I'm not making that claim. Natural language is indeed a poor model of reality. The point is that language is an attempt to model reality. If we want to communicate more effectively, we really need to do a better job collaboratively modeling reality.

@VesselOfSpirit "You're right to be worried about this problem, but your solution would make things worse"

@DefenderOfBasic FALSE. It contains information about Gary's semantic graph.


@DefenderOfBasic Here is that article
x.com/SemanticSamuel…
Also available as a PDF here: coherencelabs.net/papers/Reality…
Sam@SemanticSamuel

@DefenderOfBasic This has ended up taking longer than I expected due to the background concepts required to present a compelling argument. So, in the spirit of consistent publishing, I pivoted to talk about some of these dependencies.
x.com/DefenderOfBasi…
Defender@DefenderOfBasic
how to get your lab to be ORI compatible (if we can see your work, then it's ORI compatible)

> somewhere between the ~10 line Haskell implementation and the ~1 line agentic prompt, there lies a representation language that distills the task down to its cognitive essence, removing (almost) all of the unproductive frictions, but retaining (almost) all the productive ones.
Jonathan Gorard@getjonwithit
Now that I'm back in Princeton following the @DARPA expMath kickoff event, I'm beginning to collect my thoughts on the future of autoformalization, AI for Math, and AI for Science more broadly. Here's where I've got to. "On productive and unproductive frictions"

@DefenderOfBasic What about omitting information that disproportionately benefits oneself?

Jokes like this are funny because they point out the gap between strict deductive logic and social ontology.
Technically, if this is true, the baby will have that large number as its balance. But we find this funny because we know, instinctively, that if this were discovered, it would be "fixed" and the baby would have the "correct" net worth: zero.
But why is zero the "correct" value rather than this being the true value? Why isn't it the other way around?
Because our society privileges a social ontology (it's what Barry Smith does, and he claims to be a philosopher, but it's really just bean counting). That is to say, we do things with words, and one of those things is declarative: "this baby now has X value" is something human beings do; it is not something that is asserted by pointing to numbers on a screen.
The gap between the two fascinated Kafka and Durkheim, as well as the Soviet Russian novelists I don't know too well (The Master and Margarita does this, though). The notion of bureaucratic horror those writers play with hinges on this gap between logic and social ontology.
That gap, by the way, is why your AI is not conscious and cannot do anything and why the functionalists, EA idiots, and rationalists are all morons and you should stop listening to them losers lmao byeeeeeeeee
𝕡𝕨𝕟.𝕋∅𝕔𝕙!@0day_ninja
Big brain move.

I'd have to know more about what you're doing. You might be able to say, e.g., "formalizing in ZFC," and that probably wouldn't be misunderstood. I'd also say maybe you SHOULD formalize it in Lean. As a SWE by trade and amateur mathematician, I've found it's actually made some math easier to understand. Often, the hard part is formalizing the proofs of theorems, not expressing the theory itself. So you could express the theory. Happy to provide some pointers if you've got something you can share about your approach so far.
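The distinction between expressing a theory and formalizing its proofs can be made concrete. A minimal, hypothetical Lean 4 sketch (all names here are illustrative, not from any real development): the `structure` states the theory as data plus axioms, and `sorry` lets you state a theorem now while deferring the hard proof work.

```lean
-- A tiny "theory": a binary operation with a two-sided unit.
-- Expressing this requires no proofs at all.
structure UnitalMagma (α : Type) where
  op : α → α → α
  unit : α
  unit_left : ∀ a, op unit a = a
  unit_right : ∀ a, op a unit = a

-- Stating a theorem without proving it: `sorry` records the claim
-- so the formalization of the proof can come later.
theorem unit_unique {α : Type} (M : UnitalMagma α) (e : α)
    (h : ∀ a, M.op e a = a) : e = M.unit := by
  sorry
```

The point is that the structure declaration alone already pins down the theory unambiguously, which is most of the communicative value.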

I thought of that, and Mathematize Morality has a nice consonant ring to it.
But it feels like more than that, because we tie it into the ontology of domains like physics, qualia, etc.
So it's more cross-disciplinary.
But maybe it's fine.
Which of the two do you think is more accurate/precise? Any other alternatives?

@SemanticSamuel @BuildCoherence Is there a better term I could use to avoid this sort of confusion, do you think?

@suntzugi @BuildCoherence No worries, the word "formalize" is now overloaded, but it used to mean exactly what you're saying. I look forward to seeing what you produce.

I think I need to shift my language to avoid this confusion. I'm currently focused on mathematizing axiology/morality, in the same way physics grew from natural philosophy into measurable and useful applied mathematical frameworks.
Modern formal proofs came after, and likewise I think that's out of my scope, but I expect corroboration/invalidation from the formalizing community after publication, etc. (in addition to empirical testing).

@benjamiwar Why not focus on one very specific area first, that way you can afford the tokens to test it? Then you can refine based on your testing. If you can get it to produce interesting results in one area, people may help fund the rest.

I believe I’ve figured out a way to automate all philosophy with my reasoningtool project. It works by generating questions (over a million so far), generating guesses (not started yet because I don’t have enough AI usage), then checking the guesses it’s generated through various algorithms (ARAW and selection skill, for instance). There are over 500 skills for generating guesses and checking guesses, so I’m certain I’ll be able to find the right automated method, or I’ll just keep adding more skills until I do.
If you disagree or think I’m stupid, please let me know why. If you don’t, I will assume my approach is correct and continue as I have been for the past couple of months. I just hope it eventually gains some traction.
I believe the examples it’s produced are already evidence of how it can automate much of the thinking, writing, reasoning, and philosophical process. Oh no, maybe I’m making too grandiose of claims. Let me know if they hold up or not.
The project is available here:
github.com/benjam3n/reaso…
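The generate-questions → generate-guesses → check-guesses pipeline described above can be sketched abstractly. A minimal, hypothetical Python sketch; none of these function names come from the actual reasoningtool project, and the "checkers" here are toy stand-ins for its scoring skills:

```python
# Hypothetical sketch of a generate-and-check pipeline.
# All names are illustrative; a real system would call a model
# in generate_guesses and use real scoring skills as checkers.
from typing import Callable

def generate_questions(topics: list[str]) -> list[str]:
    """Turn raw topics into questions (stand-in for question generation)."""
    return [f"What is the nature of {t}?" for t in topics]

def generate_guesses(question: str) -> list[str]:
    """Produce candidate answers for a question."""
    return [f"{question} -> guess {i}" for i in range(3)]

def check_guess(guess: str, checkers: list[Callable[[str], float]]) -> float:
    """Score a guess by averaging the verdicts of several checkers."""
    return sum(c(guess) for c in checkers) / len(checkers)

def pipeline(topics: list[str],
             checkers: list[Callable[[str], float]]) -> dict[str, str]:
    """For each generated question, keep the highest-scoring guess."""
    results: dict[str, str] = {}
    for q in generate_questions(topics):
        guesses = generate_guesses(q)
        results[q] = max(guesses, key=lambda g: check_guess(g, checkers))
    return results

# Toy checker: prefer shorter guesses.
best = pipeline(["truth"], [lambda g: 1.0 / len(g)])
```

The design point this illustrates is that the loop is only as good as its checkers: generation is cheap, so the hard problem the project describes is building checkers whose scores actually track quality.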





