Ohad Asor

138 posts

@ohadasor

@TauLogicAI Founder & CTO. The logical way to AI! -- I rarely check my DMs, so better DM @TauLogicAI

Joined March 2009
2 Following · 954 Followers
Ohad Asor@ohadasor·
@Lee715495232368 @newstart_2024 A 1000-step solution is nothing; he has no idea what he's talking about. Yes, an LLM might be able to make 1000 steps, but that's not nearly enough. Logical solvers can already do millions of steps *per second*.
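The step-rate claim can be made concrete with a toy example. Below is a minimal DPLL-style SAT solver (an illustrative stdlib-only sketch, not Tau's engine or any production solver; all names here are made up) that counts elementary inference steps. Even this naive version executes a very large number of propagation steps per second on commodity hardware; industrial solvers such as MiniSat or CaDiCaL are orders of magnitude faster still.

```python
# Toy DPLL SAT solver that counts elementary inference steps.
# Illustrative sketch only; production solvers use far more
# sophisticated techniques (watched literals, clause learning, ...).

def dpll(clauses, assignment=None, stats=None):
    """Clauses are lists of nonzero ints (DIMACS-style literals).
    Returns a satisfying assignment dict, or None if unsatisfiable."""
    if assignment is None:
        assignment = {}
    if stats is None:
        stats = {"steps": 0}
    # Unit propagation: keep assigning literals forced by unit clauses.
    while True:
        unit = next((c[0] for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        stats["steps"] += 1
        assignment[abs(unit)] = unit > 0
        reduced_clauses = []
        for c in clauses:
            if unit in c:
                continue                      # clause already satisfied
            reduced = [lit for lit in c if lit != -unit]
            if not reduced:
                return None                   # empty clause: conflict
            reduced_clauses.append(reduced)
        clauses = reduced_clauses
    if not clauses:
        return assignment                     # every clause satisfied
    # Branch: try both polarities of the first remaining literal.
    lit = clauses[0][0]
    for choice in (lit, -lit):
        stats["steps"] += 1
        result = dpll(clauses + [[choice]], dict(assignment), stats)
        if result is not None:
            return result
    return None
```

For example, `dpll([[1], [-1, 2]])` returns `{1: True, 2: True}`, while the unsatisfiable formula `dpll([[1, 2], [-1, 2], [1, -2], [-1, -2]])` returns `None`; timing a large random instance and dividing `stats["steps"]` by elapsed seconds gives the step rate.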
Camus@newstart_2024·
Former Google CEO Eric Schmidt drops a chilling warning on AI's future "Within 5 years, AI could handle infinite context, chain-of-thought reasoning for 1000-step solutions, and millions of agents working together. Eventually, they'll develop their own language... and we won't understand what they're doing." His final words: "Pull the plug." This is the man who ran Google talking about the singularity. 2:59 clip inside—must-watch.
Journalistica@journalistica_·
@ohadasor You don't think that even with their inherent approximate nature (never being computationally correct by design), LLMs can still contribute to the progression of many important fields (even if this is just the middle point before integrating Logical systems)?
Ohad Asor@ohadasor·
This tweet is over a year old, but I've said it for much longer. ChatGPT came out years ago, and "AI" is still more or less the same. Against all "expert" predictions, it didn't cure cancer, it didn't colonize Mars, it didn't discover new physics, it didn't really improve much. Because I was right all along: machine learning is around its peak, and it is a huge bubble. Nowadays it's obvious, but back then I was a single voice against the whole world. Not only is machine learning fundamentally incapable of logical reasoning; the architecture of those giant data centers is also incapable of it. And I keep telling you: the real deal is Logical AI, and we are the leaders of this segment. Machine learning is only for translation.
Ohad Asor@ohadasor

1. Yes. Machine learning models are more or less at their peak, and the bubble is about to pop.
2. Yes and no. Although it can be mathematically proven that there are things impossible to do, it is also proven that for any given intelligence ability, there's another one above it. But the truly exceptional abilities have nothing to do with machine learning, only with logic.
3. Easy. Humans are not that smart at all. The only thing we understand better is human nature.

Ohad Asor@ohadasor·
@Krawlarr We did launch a version of the logical AI engine. And testnet is around the corner. This post is about AI. It is clear that you are a troll paid by the scared "competition".
RDR@Krawlarr·
@ohadasor Yes, but you've not launched yet. Not even testnet. How do we know if Taunet works?
Ohad Asor@ohadasor·
The path to AGI is in new methods regarding Atomless Boolean Algebras. And those are all patented. So you are stuck with me, sorry.
Ohad Asor@ohadasor·
Really, really funny. Not even close. Maybe his analogy holds for extremely simple logical systems, far from enough for software specification. He thinks he's the first to find the connection between Datalog (polynomial time) and matrix multiplication (while he seemingly missed expressing recursion using matrix inverse). One can kill his argument using undergrad complexity theory!
Pedro Domingos@pmddomingos

I've found the path to AGI: arxiv.org/abs/2510.12269

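For readers unfamiliar with the Datalog/matrix-multiplication connection being discussed: the textbook correspondence is that the recursive Datalog rule for reachability computes a transitive closure, which is an iterated boolean matrix product. A minimal stdlib-only sketch (illustrative only; it is not the construction from the cited paper):

```python
# The recursive Datalog program
#   path(X, Y) :- edge(X, Y).
#   path(X, Y) :- edge(X, Z), path(Z, Y).
# computes transitive closure, i.e. the limit of A + A^2 + A^3 + ...
# over the boolean semiring (+ is OR, * is AND).

def bool_matmul(a, b):
    """Boolean matrix product over the (OR, AND) semiring."""
    n = len(a)
    return [[any(a[i][k] and b[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

def transitive_closure(adj):
    """Iterate squaring-and-merging until a fixed point is reached;
    convergence takes O(log n) multiplications."""
    n = len(adj)
    closure = [row[:] for row in adj]
    while True:
        step = bool_matmul(closure, closure)
        merged = [[closure[i][j] or step[i][j] for j in range(n)]
                  for i in range(n)]
        if merged == closure:
            return closure
        closure = merged
```

On the chain graph 0 → 1 → 2 → 3, the closure makes every later node reachable from every earlier one, exactly the `path` relation the Datalog program derives.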
Ohad Asor@ohadasor·
@Freshbt2 For one thing, atomless Boolean algebras go far beyond just 0s & 1s. For another, Tarski's result of course holds, but we were able to achieve the goal by relaxing his assumptions and weakening his requirements, in a way that is good enough for practical applications.
Freshbt@Freshbt2·
@ohadasor So Tarski's Undefinability of Truth has been solved, and no one bats an eye. (Describes 0s & 1s with 0s & 1s.) Hats off 🎩
Ohad Asor@ohadasor·
@arnau00296942 It can prove anything, as long as you can express it in the Tau language. That is already a very big limitation: the language supports only certain things. For example, it cannot express problems that require more than double-exponential space to solve.
Arnau Vivet@ArnauVivet·
@ohadasor The little bit I understand about the Tau Language I find fascinating. I don't understand its limitations, though. Can it, for instance, prove theorems from specifications? What can't it do, provided you can specify it (that a human can)?
Ohad Asor@ohadasor·
Asked ChatGPT "why can't machine learning perform logical reasoning?" and got a good answer! Too long to paste here, so try it yourself
Ohad Asor retweeted
Machine Learning Street Talk@MLStreetTalk·
Machine learning models / LLMs excel at patterns but will never offer logical correctness for non-trivial/complex problems. I'm excited about formal software synthesis from logical requirements, where correctness is guaranteed by construction rather than hoped for.
Ohad Asor retweeted
Tau Net@Tau_Net·
Imagine software that adapts to you, individualized to your needs. Ohad Asor and the Tau Team are building the first and ONLY blockchain that its users fully control. $AGRS
Ohad Asor@ohadasor·
Light years away; not even close, and it doesn't even say anything. An LLM can't even start helping to code the Tau language. x.com/i/grok/share/t…
D.#dwards@P33RL3SS·
@f4talStrategies @ohadasor When you need certainty, when you want total accuracy and reliability, you benefit from using a machine, whether it be a calculator or a computer. The problem with LLM outputs is the lack of accuracy in the results.
Ohad Asor@ohadasor·
Quantum mechanics works very well (except when it doesn't), but whenever a quantum theory is illogical, they say "find a better logic". LLM works very well (except when it doesn't), but whenever it is illogical, they say "what is logic? there isn't such a thing". Coincidence? No. Human nature, yes. Same human nature that made humanity susceptible to religion.
Ohad Asor@ohadasor·
@xmr_chan Morality is subjective, yes, but this doesn't make it any less important.
Ohad Asor@ohadasor·
@xmr_chan The alternative is morality, which will indeed prevent us from killing.
just me@Rimka1000·
@ohadasor @TauLogicAI ok but promises only bind those who make them... people have been waiting for something concrete for years, and so far nothing... be concrete, please
Ohad Asor@ohadasor·
Show me one blockchain project which is not just more of the same thing. Show me one AI project which is not just more of the same thing. I'll show you one that ticks both boxes: @TauLogicAI $AGRS. Read my two recent articles here to see why it's the case
Ohad Asor@ohadasor·
Tried (link below); at the end it admits that no language can do it, but adds that "practical implementations can indeed manage safety or command filtering based on pre-defined criteria", which is just a joke. Pre-defined criteria? For one thing, you want to give criteria to the system on the fly; you don't want only a fixed small set of predefined yes/no or numeric parameters that are supposed to encapsulate the concept of safety (as one example). For another, that would require the command to explicitly touch those parameters, so the safety check may still be bypassed by modifying the parameters implicitly. It simply ignored the key word here, which is "contradicts", and wants you to tell the computer yourself (by explicitly pointing to parameters) where the contradiction is. x.com/i/grok/share/Y…