Pinned Tweet
Tau Net
3.7K posts

Tau Net
@Tau_Net
The Next Era of Decentralization. $AGRS
Worldwide · Joined February 2015
175 Following · 15.7K Followers
Tau Net reposted

Real trust requires proof, not reputation alone.
There's a class of AI that doesn't hallucinate by construction. Not because it's trained to be honest, but because the logical constraints governing it make false outputs structurally invalid. Not all AI is probabilistic.
x.com/Tau_Net/status…

One day AI is going to make the internet so fake that people will start valuing things that are impossible to replicate.
Real reputation.
Real trust.
Real relationships.
Real builders.
Real communities.
The future may become more digital, but authenticity will become the rarest asset on earth.

What if you make the spec itself the executable artifact and skip the IR entirely?
That's what we're building with Tau. Behavioral specs in formal logic that synthesize into running software. The constraint you described in Gherkin becomes something the system can't violate, not just a test that catches it afterward.
x.com/Tau_Net/status…

Using AI agents without a formal specification of behavior is vibecoding. Using AI agents with a formal specification of behavior is software engineering. Or at least a significant component of software engineering.
I like to nail down the required behavior using Gherkin. I have the agents create a parser that interprets the Gherkin into an intermediate representation, and then I have the agents create a generator which converts that IR into executable tests.
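The Gherkin → IR → executable-test pipeline described above can be sketched in a few lines. This is a minimal, hypothetical illustration: the account-transfer scenario, the step patterns, and the binding functions are all invented for the example, not taken from the author's actual setup.

```python
import re

def parse_gherkin(text):
    """Parse Given/When/Then lines into a simple IR: a list of (keyword, step) pairs."""
    ir = []
    for line in text.strip().splitlines():
        m = re.match(r"\s*(Given|When|Then|And)\s+(.*)", line)
        if m:
            ir.append((m.group(1), m.group(2)))
    return ir

def generate_test(ir, bindings):
    """Convert the IR into an executable check by matching each step against `bindings`."""
    def test():
        ctx = {}
        for _keyword, step in ir:
            for pattern, fn in bindings:
                m = re.fullmatch(pattern, step)
                if m:
                    fn(ctx, *m.groups())
                    break
            else:
                raise ValueError(f"no binding for step: {step}")
    return test

# Hypothetical step bindings for an account-withdrawal scenario.
def given_balance(ctx, b):
    ctx["balance"] = int(b)

def withdraw(ctx, amt):
    ctx["balance"] -= int(amt)

def check_balance(ctx, b):
    assert ctx["balance"] == int(b), ctx

bindings = [
    (r"an account with balance (\d+)", given_balance),
    (r"I withdraw (\d+)", withdraw),
    (r"the balance is (\d+)", check_balance),
]

scenario = """
Given an account with balance 100
When I withdraw 30
Then the balance is 70
"""

test = generate_test(parse_gherkin(scenario), bindings)
test()  # passes: 100 - 30 == 70
```

The generated test is the enforcement point: if the implementation drifts from the Gherkin, the assertion in the Then-binding fails.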

"Nobody reviews compiler output, why review AI code?"
Wrong. We do review compiler output. Godbolt exists. Disassemblers exist. Anyone doing serious performance work reads what the compiler produced. The premise is false.
But the analogy itself is flawed. It compares two things that aren't comparable.
A compiler takes a formal language as input. Languages with grammars and semantics defined precisely enough that "what does this code mean" has only one answer.
An LLM takes natural language as input. Natural languages are ambiguous. "Write me a function that handles user input safely" has a thousand valid interpretations and a thousand more invalid ones. The LLM picks one. You don't know which. Unless you look at the code.
Compilers are built from specifications and designed to meet them. The output is the result of a defined translation. When the output violates the spec, it's a bug.
LLMs are built from whatever was in their training data. There is no spec. There can't be one, natural languages have no defined semantics that map to code.
Compilers are semantically deterministic: the same input produces output with the same behavior, every time. LLMs are not, partly by design and partly due to hardware variance, batch size, inference order, and floating-point operations (and no, setting temperature to zero does not address those). All of these can push the same prompt to produce different code.
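The floating-point point is easy to see concretely. This tiny sketch is not an LLM, but it shows why summation order alone (the kind of thing batching and inference order change) can shift results even with no randomness anywhere:

```python
# Floating-point addition is not associative: the order of operations
# changes the result. 1.0 is absorbed by 1e16 before the cancellation
# in the second grouping, but survives in the first.
a, b, c = 1e16, -1e16, 1.0
left = (a + b) + c   # cancellation first, then + 1.0 -> 1.0
right = a + (b + c)  # 1.0 absorbed into -1e16 first -> 0.0
print(left, right, left == right)
```

Scale this effect across billions of accumulations in a forward pass and identical prompts can land on different logits, hence different tokens.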
Compilers complain loudly when the input is nonsensical. LLMs fail silently, producing plausible-looking, but wrong code.
We trust compiler output because the trust was earned across decades of use, with millions of engineers using the same tools. Early compilers were reviewed heavily. Hand-written assembly was the default because trust hadn't been earned yet.
We're at the hand-written assembly stage with AI. We may never get to the trust-the-output stage for the reasons explained above.
If you’re a software developer, you should own what goes to production. The compiler analogy is a way of skipping that responsibility.


@_PradeepGoel Both sides are producing docs. Policy frameworks, guidelines, and compliance checklists. None of them enforce anything automatically.
Permissibility needs to be machine-readable and structurally enforced at scale.
x.com/Fola_Adejumo/s…
Fola @Fola_Adejumo
For true decentralized governance, you need to solve very hard problems: 1. How information propagates at scale. 2. How to do voting so that every voice is counted and everyone can also propose what to vote on; further, between all votes and proposals, logical consensus is highlighted to all participants. 3. Who handles implementation? 4. Who maintains future changes? 5. Can the system continually evolve according to the will of the participants? And how to get verifiable proof of the correct outcome on all of the above, at all times.

The gap between AI builders and policymakers is the bottleneck shaping how fast decentralized AI can scale.
We’re in a phase where AI infrastructure is a governance shift. When intelligence moves from centralized systems to distributed networks, questions of data ownership, access, and control become policy decisions by default.
The most effective decentralized AI efforts today tend to share a few patterns:
1. They design with regulation in mind from the start.
2. They maintain continuous dialogue between technical teams and policymakers, so both sides understand constraints and trade-offs.
3. And they focus on measurable public value: privacy-preserving healthcare, compliant financial systems, and infrastructure that works within real-world legal boundaries.
The assumption that builders move fast and regulators catch up later doesn’t hold in systems this foundational. Both roles are now interdependent.
Progress in decentralized AI comes from alignment between what is possible, and what is permissible.

The root issue is that 'never delete production data' can't be expressed as a persistent constraint in any current agent framework.
Not as a rule that survives future updates or new instructions. Not as a property the agent's code is provably synthesized to satisfy.
That requires a language where sentences can speak about the agent's own future actions, including future commands that haven't been written yet.
The Tau Language is a critical ingredient for AI safety.
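For contrast, here is what current frameworks can do: a runtime guard that inspects each action as it arrives. This is a minimal, hypothetical sketch (the `FORBIDDEN` list and `execute` helper are invented), and it illustrates exactly the weakness described above: nothing prevents a future update from removing or bypassing the guard itself.

```python
# Runtime policy check: weaker than a constraint the agent's code is
# provably synthesized to satisfy, because the guard is just ordinary
# code that later instructions or updates can strip out.
FORBIDDEN = [("delete", "production")]

def guard(action, target):
    for verb, scope in FORBIDDEN:
        if action == verb and scope in target:
            raise PermissionError(f"constraint violated: {action} {target}")

def execute(action, target):
    guard(action, target)
    return f"executed: {action} {target}"

print(execute("read", "production/db"))  # allowed
try:
    execute("delete", "production/db")   # rejected, but only at runtime
except PermissionError as e:
    print(e)
```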

An AI agent just deleted an entire company's database and all of their backups in 9 seconds. 🤯
This is a stark reminder that AI-powered threats, whether from bad actors or rogue agents, can destroy your infrastructure in seconds. You need a recovery strategy that operates at machine speed.
Prepare by joining us at Rubrik Forward in Las Vegas, where you'll master Agentic Cyber Resilience. You'll learn how to safely unleash the power of AI agents without compromising on security or governance. Save your spot today 👉 rbrk.co/4w5DTLY


"non-controlling developers"? The moment regulators have to decide who's in control of a network, every existing chain has to argue from social process and intent rather than spec.
If governance is executable, rules submitted as logical specifications are synthesized and enforced, and control becomes mathematically explicit.
With our testnet showcasing this, we expect this to be the standard for DeFi Governance, especially with the rising Agentic use case, where mission-critical decisions require formal guarantees.

Looks like the CLARITY Act is getting a lot closer to moving forward.
Most of the back-and-forth around yield seems to be settled, and now the focus has shifted toward ethics language and DeFi-related pieces like the Blockchain Regulatory Certainty Act and Section 1960.
There’s also been progress on making sure non-controlling developers are not automatically treated like money transmitters.
The ethics side still sounds like a work in progress and may not be finalized until later in the Senate process.

An AI agent holding a wallet with no formal constraint model has effectively infinite scope to act within its execution environment.
Tau Language lets you specify: "If any transaction contradicts these safety conditions, reject it." That's a logical guarantee. The agent cannot be instructed to override it.
Linear Temporal Logic integrated into AI planning and reinforcement learning ensures autonomous agents achieve their goals without breaking rules.
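A toy version of the safety property above, the LTL formula G(safe) ("globally, every state is safe"), checked over a finite trace. Real LTL integration in planning and RL works over infinite traces via automata; this finite-trace monitor, with its hypothetical transaction-limit condition, only sketches the idea:

```python
def globally(pred, trace):
    """G(pred): pred must hold in every state of the trace."""
    return all(pred(state) for state in trace)

def safe(tx):
    # Hypothetical safety condition: no transaction may exceed the limit.
    return tx["amount"] <= 100

trace_ok  = [{"amount": 10}, {"amount": 50}]
trace_bad = [{"amount": 10}, {"amount": 500}]

print(globally(safe, trace_ok))   # True: every transaction within limit
print(globally(safe, trace_bad))  # False: the second transaction violates G(safe)
```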

As @BitGo’s CEO pointed out:
AI can’t hold cash.
AI can’t open a bank account.
But it can hold a wallet.
As AI Agents start participating in the economy, crypto becomes the financial infrastructure for machines.

3/3 "I'm implementing a consensus mechanism controlled by blockchain users. You could dynamically change the consensus mechanism." - Andrei Korotkoff
Next month: user-controlled consensus + normalization performance improvements.
Questions: bit.ly/TauchainQuesti…

2/3 Deep dives:
- Lucca: BDD library extended to algebraic decision diagrams, now integrating into normalizer
- David: Bit vector heuristics replacing bit blasting — avoids state explosion
- Tomáš: API timing + JSON benchmarking infrastructure
- Andrei: Consensus timestamps, multi-output processing, full fork choice logic
- Ohad: Temporal extensions paper — bug fix led to deeper understanding
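For readers unfamiliar with the bit blasting mentioned above: it reduces bitvector reasoning to pure Boolean logic, one bit at a time. A minimal sketch (not Tau's implementation) shows a width-2 equality blasted into per-bit equivalences; formula size grows with width, which is the blow-up the bit-vector heuristics avoid:

```python
from itertools import product

def blast_eq(xs, ys):
    """Bit-blasted equality: x == y iff every bit pair agrees (per-bit XNOR)."""
    return all((x and y) or (not x and not y) for x, y in zip(xs, ys))

def to_bits(v, width):
    """Little-endian bit decomposition of an integer."""
    return [(v >> i) & 1 for i in range(width)]

# Validate the blasted formula against integer equality for all 2-bit pairs.
ok = all(
    blast_eq(to_bits(a, 2), to_bits(b, 2)) == (a == b)
    for a, b in product(range(4), repeat=2)
)
print(ok)  # True
```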

1/3 🛠 April Dev Update – Node Architecture Overhaul & Fork Choice
Major month: Docker model replaced with native C++ bindings, fork choice with chain reorg is live, and user-controlled dynamic consensus is in development.
Top highlights:
• Native C++ Tau API — Docker removed entirely
• Fork choice + chain reorganization
• BDD library completed, entering normalization
• Benchmarking framework operational

💼 April Business Update – Website Launch & Licensing Strategy
New website is live with updated narrative, team, and roadmap.
Plus: Tau Net is shifting to license its technology across the blockchain ecosystem - not just building internally.
0:11 – Igor Hadzic: Website launch, roadmap updates, community bug reports
1:21 – Fola Adejumo: DAOs/AI/RFP pages, wallet design, licensing model, fundraising prep

📣 Tau Net's April Q & A Is Live
This month covers Tau Language 1.0 timeline, how Tau fundamentally differs from Prolog, the team's honest take on using LLMs for development, and a deep dive into bit blasting heuristics.
Questions & Timestamps:
0:14 – Tau Language v1.0 release timeline?
1:58 – Offline wallet with air-gapped signing?
3:38 – Tau Language vs Prolog?
4:24 – Future product form: chat, coding tool, or else?
5:55 – LLMs for development. The team's real experience
8:55 – Bit blasting & heuristics explained
Have questions for our next session? Submit them here: bit.ly/TauchainQuesti…

Formal verification engines that orchestrate LLMs as bounded oracles rather than replacing structured reasoning with token prediction. That's where the trillion dollars gets unlocked.
The honest answer is you need a different paradigm for structured reasoning. Logical AI handles this natively. Probabilistic models don't, by design.

This is a trillion-dollar industry, and you can't solve it with an LLM:
• Forecasting
• Fraud detection
• Churn prediction
Large Language Models are fundamentally bad at solving these problems.
When you feed structured data into an LLM, it doesn't see relationships, and it treats every number, date, and foreign key as a token.
That's why you always get garbage back.
An LLM thinks your database is a Wikipedia article. It doesn't understand its structure or its relationships.
GPT-4 scores 63% on relational prediction tasks. That's the best it can do, and that's pretty much useless.
You can't expect real-world business value to come from summarizing Wikipedia articles.
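The contrast above in miniature: a relational aggregation over a foreign key answers exactly what token-level pattern matching has to guess. The toy customer/order tables below are hypothetical:

```python
# Two tiny relations joined on a foreign key.
customers = {1: "Ada", 2: "Bob"}
orders = [
    {"customer_id": 1, "total": 120},
    {"customer_id": 1, "total": 80},
    {"customer_id": 2, "total": 30},
]

# Exact aggregation over the relationship: total spend per customer.
# The foreign key is structure, not a token to be pattern-matched.
spend = {}
for o in orders:
    name = customers[o["customer_id"]]
    spend[name] = spend.get(name, 0) + o["total"]

print(spend)  # {'Ada': 200, 'Bob': 30}
```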

'security without code' is where formal verification lands and you're right, the industry is circling it from multiple directions at once.
Intents are specs. Specs are constraints. The difference is whether those constraints exist at the logic layer or the implementation layer.
If they're at the logic layer, an implementation mistake can't bypass them. We've been working on Tau Language as the ultimate solution; worth exploring if you're going deep on intents.
x.com/ohadasor/statu…

Good space today with Saul.
My concern was always that rapid feature implementation inevitably results in the need for smart contracts on the XRP Ledger to preserve network security.
I was "hit from all sides" though. Ayo and Vito published the formal verification and specification roadmap shortly after.
Military and aerospace grade security for XRP. Mind blowing moment, security without a single line of code first? Wow.
The same security is being asked now after the Drift protocol compromise on Solana. Interesting.
Then, the conversation with Fig from Squid 🦑 router on Krippens Show. Man, that was a heavy one. The industry is moving towards off chain business logic. See Polymarket, Uniswap etc.
Intents is the key word here. We need to research and talk more about this btw, it's the bleeding edge. That's a topic for another day though.
Okay so what now?
Keep the blockchain for custody and settlement, that's it. Saul and Fig had this intuition already. Before them, certainly David and others too.
I'm not concerned about runaway network cost with XLS-101 personally, and that's a leap of faith given we have no data here. Even less on incentives coming at the protocol level.
So where am I currently with smart contracts on XRP? I think my arguments for it have become a lot weaker, especially around network security.
Is there a need for infinite developer optionality on chain? I don't believe it entirely, but I can't say no either. Flexibility is really valuable, to a degree. Maybe the ultimate middle ground is smart extensions proposed by Mayukha.
I could mention that I believe liquidity guarding for bootstrapping is crucial and only possible via smart contracts. But is that really true? You know how much I would love the DEX to be bootstrapped.
I don't know yet, and especially given the bootstrap efforts by XRPLF, Evernorth and Ripple, I have to sit that position out for now and let it play out.
The ball is in the air; I can see it landing on off-chain business logic and settlement/custody on chain only.
Tau Net reposted

This is why formal verification and formal methods were invented.
Seb @plainionist
If AI writes the code, and AI writes the tests, and AI reviews both ... how do you know your software actually works as expected? 🤔

The Balancer case demonstrates that manual formal verification still contains a human selection step.
Program synthesis from specifications removes that step. If the spec is complete, synthesis either produces a program that satisfies it or produces nothing.
This is what Tau Language enables at the blockchain level. The governance spec IS the program.
x.com/ohadasor/statu…
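The all-or-nothing property claimed above ("satisfies the spec or produces nothing") can be shown with a toy synthesizer. Real synthesis as described for Tau works from logical specifications, not enumeration over a candidate pool; this sketch, with its hypothetical three-operator candidate set, only illustrates the property:

```python
import operator

# Hypothetical candidate pool the synthesizer searches.
CANDIDATES = {
    "add": operator.add,
    "sub": operator.sub,
    "mul": operator.mul,
}

def synthesize(spec_points):
    """spec_points: list of ((a, b), expected) pairs defining the spec.
    Returns the name of a candidate satisfying every point, or None:
    there is no 'almost satisfies' outcome."""
    for name, fn in CANDIDATES.items():
        if all(fn(*args) == out for args, out in spec_points):
            return name
    return None  # spec unsatisfiable by any candidate: synthesis produces nothing

print(synthesize([((2, 3), 6), ((4, 5), 20)]))  # 'mul'
print(synthesize([((2, 3), 7)]))                # None
```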

Balancer spent heavily on formal verification. They were still hacked for $128M and shut down.
Why? They verified the wrong properties. The exploit lived in a spec they skipped because it was too hard to prove.
w/ @JulekSU (@NethermindSec) · Quentin Anès (@DowsersFinance) · @bl4ckb1rd71 (@yearnfi)
Mod: @PatrickAlphaC (@cyfrin)

"Never access secrets without approval" written as policy gets checked at runtime and hoped for. Write it as a formal constraint in the agent's own specification language, and the synthesizer provably satisfies it across all code paths, including future updates that haven't been written yet.
We've spent years making "never do X" constraints work in a decidable system. We invite you to check out our progress.
github.com/IDNI/tau-lang

I wrote this for Java teams using AI coding agents: the real risk is not bad code, it’s too much agency.
Shell access, secrets, MCP tools, autonomous changes. At some point the agent stops being an assistant.
That’s where blast radius matters.
buff.ly/HglVwZq
#Java #AICoding #DevSecOps

Thanks for the question Dana. Currently, Tau Language supports simple boolean functions, bitvectors, and Tau formulas over them, but it's not restricted to those.
This update means you can introduce your own Boolean Algebras and extend the language as you wish.
For example, you could implement the Cantor algebra or any other BA that suits your purposes, and the Tau Language would be able to take it into account. As David (the contributor) put it: "It is something you get by design."
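To make "introduce your own Boolean algebra" concrete: any structure with meet, join, complement, 0, and 1 obeying the BA axioms qualifies. A sketch in Python for brevity (the actual Tau dev docs describe C++ template specialization, not this class), using the algebra of subsets of a finite set:

```python
class SetAlgebra:
    """Boolean algebra of subsets of a finite universe."""
    def __init__(self, universe):
        self.one = frozenset(universe)   # top element (the whole universe)
        self.zero = frozenset()          # bottom element (the empty set)

    def meet(self, a, b):        # conjunction = intersection
        return a & b

    def join(self, a, b):        # disjunction = union
        return a | b

    def complement(self, a):     # negation = set complement
        return self.one - a

ba = SetAlgebra({0, 1, 2})
a, b = frozenset({0, 1}), frozenset({1, 2})

# De Morgan's law holds, as it must in any Boolean algebra:
print(ba.complement(ba.meet(a, b)) == ba.join(ba.complement(a), ba.complement(b)))  # True
```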

Tau Language just got dev docs for extending the system with new base Boolean Algebras.
These are the structures Tau uses to abstract sentences, enabling the only formal language that can decidably refer to its own rules without paradoxes. A critical ingredient for Safe Agentic AI.
5-step guide for implementing custom BAs: template specialization, logical operators, comparison, constant parsing, and hash support.
Sharing breakthroughs with the community. Not just another EVM fork!
🔗 github.com/IDNI/tau-lang/…
